iphone safari scaling weirdness
January 27th, 2011

So, the iPhone 4 has a high resolution display: twice that of the original, at 640 pixels wide vs 320.  For a variety of reasons, however, the CSS unit affectionately known as px stays fixed with respect to its physical size instead of being an exact representation of the actual dots on the screen.  This is actually all fine and good, because it means that the thing we think of as a “pixel” is still roughly the same size on the mobile device as it is on our computer monitor, so we can read the text as expected without a magnifying glass.  It’s also how the W3C specs were designed, but it can be counter-intuitive if you’ve assumed all this time that “px” = dot, which it does not, except on the large majority of the most common display devices: ordinary computer monitors.
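If you want to see this ratio for yourself, WebKit exposes it as window.devicePixelRatio.  A trivial check, nothing in it specific to my setup:

    <script type="text/javascript">
      // On an iPhone 4 this reports 2: each CSS "px" covers a 2x2 block of
      // hardware dots.  On a typical desktop monitor it reports 1, which is
      // why "px == dot" feels true there.
      var ratio = window.devicePixelRatio || 1;
      alert("devicePixelRatio = " + ratio);
    </script>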

The iPhone mostly hides all of this from you, presenting all coordinates on the device as if the screen were 320px wide rather than its full resolution of 640 dots.  This should all just be fine: if you’ve got something higher resolution to display, you can use fractional units or supply imagery with a higher density of dots, and the browser rescales it in a way that preserves the extra detail.
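As a purely illustrative example of the higher-density-imagery case: serve an image that is 640 dots across but size it at 320 CSS pixels, and on the iPhone 4 there are enough hardware dots under that box to show every dot of the source.  The filename here is made up, obviously:

    <!-- tile640.png is a hypothetical 640x640 source image; displayed at
         320 CSS px on the iPhone 4, every source dot maps to a hardware dot. -->
    <img src="tile640.png" style="width: 320px; height: 320px;" alt="map tile" />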

However, today I noticed one startling thing: if I use the Apple meta tags to force the webpage into native resolution, I get roughly twice the framerate for image manipulation.  Now, you are surely thinking: of course you do, you just removed a rescale operation from the pipeline.  But this is not the case.  The maps library is already rescaling all of the imagery for display, and in theory the iPhone should just be folding its own display scale settings into the transformation matrix, resulting in no further work.  Yet if I run at native resolution, with the image scaling done by hand in JavaScript and the poor iPhone asked to juggle 4 times as many pixels, I get twice the speed.  The maps library precisely tracks my finger movements with no lag and feels completely native.
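For reference, the sort of viewport declaration I mean looks roughly like this; treat it as a sketch rather than a recipe, since the exact attributes depend on your page:

    <!-- Sketch: force 1 CSS px = 1 hardware dot on the iPhone 4.  With an
         initial scale of 0.5 the layout viewport comes out at 640 CSS px
         wide, so each CSS pixel lands on exactly one hardware dot. -->
    <meta name="viewport"
          content="initial-scale=0.5, maximum-scale=0.5, user-scalable=no" />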

To be fair, I do not yet know whether this is actually a rendering issue or a problem with the touch events.  It almost seems like the touch events are being averaged too heavily when they are delivered to an element that has the native 2:1 scale factor applied to it.  I haven’t been able to get precise measurements, but the event stream looks “coarser”, maybe carrying only an eighth of the resolution it has when running at native scale.
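I haven’t instrumented this properly yet; the quick-and-dirty check I’ve been doing is just watching how far apart consecutive touchmove points land.  Something like this (just a sketch):

    <script type="text/javascript">
      // Rough instrumentation: log the distance between consecutive touchmove
      // points.  A "coarse" stream shows big jumps between points; a precise
      // one moves only a pixel or two at a time.
      var lastX = null, lastY = null;
      document.addEventListener("touchmove", function (e) {
        var t = e.touches[0];
        if (lastX !== null) {
          console.log("delta", t.clientX - lastX, t.clientY - lastY);
        }
        lastX = t.clientX;
        lastY = t.clientY;
        e.preventDefault();  // keep Safari from panning the page instead
      }, false);
    </script>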

I did verify that using the CSS zoom property produces the same speedup.  For example, zooming a parent element to 50% and then sizing its child to twice normal size creates a high-resolution region on the screen, and the events delivered to that region are crisp and precise.
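Concretely, the arrangement I tested looks roughly like this; the id and sizes are just for illustration:

    <!-- The parent is zoomed to 50%; the child is sized at twice the CSS
         pixels it would normally get.  The net on-screen size is unchanged,
         but the child now has one CSS px per hardware dot, and the touch
         events it receives are noticeably finer. -->
    <div style="zoom: 0.5;">
      <div id="map" style="width: 640px; height: 960px;">
        <!-- high-resolution map imagery goes here -->
      </div>
    </div>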

Even though my first experiments made me think that the graphics rendering was actually binding up, the fact that zooming and handling very large canvases works so smoothly leads me to believe there’s plenty of render bandwidth.  If the issue really is touch event averaging, then it means we can get much more precise touch events out of WebKit by targeting a zoomed div.  It really shouldn’t make a difference, but I can imagine some engineers at Apple, faced with this new native zooming, dividing everything by the scale factor, including the internal averager that the touch processor uses, resulting in a touch event stream on hi-res displays in WebKit that is much coarser than it should be.

Stay tuned… with a solution in hand, now I just need to find out why it works.
