Categories
Tutorials

How Instruments can be used to fix a graphics performance issue

Lately, I have been investigating an issue that showed up in a customer’s app.

My customer’s app is a sort of PDF viewer that also allows adding annotations. Such annotations are not stored in the PDF itself; instead, they are managed in a custom way and drawn on top of the PDF in a dedicated CATiledLayer-based view.

The issue was that after a couple of zoom-in/out operations, or alternatively after moving from one PDF to another, the CPU usage jumped to 100%, even though no apparent operation was ongoing. This greatly hampered the overall experience of the app, since practically all graphics operations became extremely slow while the app was stuck in that state. Curiously, other kinds of operations, e.g., downloading a file, were not slowed down significantly.

The issue had quite a trivial cause, due to some “bad” programming (meaning that an obvious rule was not respected), but the interesting part is how I came to understand what was going on.

Instruments was the main tool that came to the rescue, as you can imagine. The picture on the left shows the CPU Profiler tool output. You can see how the overall CPU usage goes to 100% at some point and stays there. The fundamental bits of information one can get from this output are the following:

  • there was something going wrong in the cleanup phase of a pthread lifecycle; knowing that the CATiledLayer used for optimised drawing uses threads, this hinted that something was not handled correctly in the drawing phase; a CATiledLayer bug was hard to believe, but still a possibility;
  • furthermore, while the program was running, the “self” field showed that a very large number of calls were being made to practically all symbols under “pthread_tsd_cleanup”, and that those calls would not halt for any reason;
  • among the calls being made repeatedly, my attention was caught by those to FreeContextStack/PopContext/GetContextStack.

The last point was the key to understanding that something in the handling of the Core Graphics context stack was not being done correctly. So I set out to investigate the custom drawing code, and indeed what I found was a couple of unbalanced calls to UIGraphicsPushContext and UIGraphicsPopContext. Fixing this removed the CPU utilisation issue.
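As a reminder of the rule that was broken, here is a minimal sketch of balanced context handling in a CATiledLayer drawing callback; the annotation-drawing helper is hypothetical:

    - (void)drawLayer:(CALayer*)layer inContext:(CGContextRef)ctx {
        //-- make ctx the current UIKit context so UIKit drawing APIs can be used
        UIGraphicsPushContext(ctx);

        //-- hypothetical helper drawing the annotations for the tile being rendered
        [self drawAnnotationsInRect:CGContextGetClipBoundingBox(ctx)];

        //-- every push must be balanced by exactly one pop on every code path,
        //-- including early returns; otherwise the context stack gets corrupted
        UIGraphicsPopContext();
    }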

As I said, the issue was caused by incorrect programming, but nevertheless catching it was an interesting experience.

Categories
Tutorials

What’s new in iOS 7 User Interface – Part 2: Deference

In a previous post, I began describing the changes that the introduction of iOS 7 brought to the iOS UI/UX. In that post, I listed the 4 main principles shaping the idea of the iOS flat UI:

  • Clarity
  • Deference
  • Depth
  • Detail

Here I will try to clarify what a “deferent” UI should be. Again in the New Oxford American Dictionary, I found deference defined as “humble submission and respect”. As Apple applies this concept to iOS, the subject of deference is meant to be the User Interface itself, while the object of deference is user content. This means that the User Interface should not get in the way of user content; the UI should not be prominent over content. Rather, it should exalt user content.

An example of deference given by Apple can be found in its Calendar app. Specifically, as you can see in the image below, look at the search bar. In the iOS 6 Calendar app, the search bar reduced the space available for user content. In iOS 7, the search bar as such disappears and is replaced by a magnifier icon; when you tap on it, the search field appears inside the navigation bar. The navigation bar itself changes its content to adapt to the new context by displaying two buttons: Today and Done.

Another example of deference is provided by the new Notes app. It must be said that the old Notes app was surely one of the worst in the Apple pack. Here again we find the trick of the disappearing search bar, which gives more space to content. Comparing the two screenshots, it becomes apparent that in the new Notes app content is king, while in the old one it was overshadowed by several UI elements: the typeface used for notes, the strong colors of both the background and the text, the grid lines, and so on.

Looking at the Notes app, it is interesting to note that flat UI, in iOS parlance, does not mean “no texture”. Indeed, the Notes app features a “realistic” (Apple’s wording) textured white background. It seems that what really matters is that “realistic” UI artifacts be “deferent”, as happens with the background in the Notes app.

Finally, a great example of deference, i.e., content over UI, is found in the new Weather app. As you can see in the comparison below, gone is the card-like appearance, whose only effect was some clutter and less usable space. Instead, we find a big background image representing the current conditions and big centered lettering specifying the current temperature; the larger available space makes room for a new, more explicit textual representation of the current weather status and for one more hour in the detailed hourly forecast.

I hope I have made deference a bit easier to understand as a basic principle of the iOS 7 UI. In a future post, I will examine the next principle: depth.

Categories
Tutorials

Hitchhiker’s guide to MKStoreKit

In-App Purchase is one of those great features of the iOS ecosystem that I wish were easier to understand and implement in my apps. In the end it is just machinery that will not, by itself, really add value to my apps, and it would be great if the Apple motto “it just works” could be applied to StoreKit as well.

A very good read to start with is this tutorial on Ray Wenderlich’s blog, which explains all the steps required, from setting up your in-app purchase items in iTunes Connect to implementing MKStoreKit in your app.

So far, I have found that the most convenient way to add In-App Purchase support to my apps is through Mugunth Kumar’s MKStoreKit. It makes things much easier and almost straightforward. On the other hand, MKStoreKit is presented by its author in a series of posts on his blog that are a bit sparse and somehow fail to give a quick overview of the way you are supposed to make things work.

In this post, I am going to summarise the steps required to integrate MKStoreKit into your app for non-consumable, non-renewable items. I will assume that all the App Store paraphernalia has already been dealt with; if you are just getting started with In-App Purchases, you may want to read the first part of the aforementioned tutorial.

So, getting to the meat of it, what you need to do to set up and use MKStoreKit in your app is:

  1. in MKStoreKitConfigs.h, define macros for all of your items, e.g.:


    #define kKlimtPictureSetId @"org.freescapes.jigsaw.klimt.pictureset"
    #define kKlimtAltPictureSetId @"org.freescapes.jigsaw.klimt.altpictureset"

  2. create a MKStoreKitConfigs.plist file where you list all of your items; it could look like the picture below.


  3. in your app delegate, call:

    [MKStoreManager sharedManager];

    in order to initialize MKStoreKit and give it time to retrieve info from the App Store while the app is initialising;
  4. whenever you want to check whether a feature has been bought, call:

    [MKStoreManager isFeaturePurchased:kKlimtPictureSetId]

  5. when the user buys a feature, call the method below (steps 4 and 5 are combined in a short sketch right after this list):

    [[MKStoreManager sharedManager] buyFeature:kKlimtPictureSetId
                                    onComplete:...
                                   onCancelled:...];

  6. to implement the required “restore purchases” functionality, call:


    [[MKStoreManager sharedManager] restorePreviousTransactionsOnComplete:^()
    {
        [self handlePurchaseSuccess:nil];
    }
    onError:^(NSError* error)
    {
        [self handlePurchaseFailure:error];
    }];
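Putting steps 4 and 5 together, a buy-button action might look like the following sketch; unlockKlimtPictureSet is a hypothetical helper, and the exact block signatures vary across MKStoreKit versions, so check your MKStoreManager.h:

    - (IBAction)buyKlimtPictureSet:(id)sender {
        //-- nothing to buy if the feature is already unlocked
        if ([MKStoreManager isFeaturePurchased:kKlimtPictureSetId]) {
            [self unlockKlimtPictureSet];
            return;
        }
        [[MKStoreManager sharedManager] buyFeature:kKlimtPictureSetId
                                        onComplete:^(NSString* purchasedFeature) {
                                            [self unlockKlimtPictureSet];
                                        }
                                       onCancelled:^{
                                            NSLog(@"purchase of %@ cancelled", kKlimtPictureSetId);
                                        }];
    }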

That is all there is to it! Really straightforward and nice.

Categories
Tutorials

iOS6: dynamic autorotation

One of the most intrusive changes brought by iOS6 is the way autorotation is handled in UIViewControllers. If you have an autorotating app for iOS5, you will need to change it to correctly support autorotation under iOS6. If you develop an app which is supposed to run on both iOS5 and iOS6, then you will have to handle autorotation in the old way as well as the new one.

In this post, I am going to provide a simple solution to a problem which, as far as I have seen around me, does not have an entirely trivial solution. The problem statement is the following: a view is allowed to autorotate only under certain conditions; otherwise, it is frozen (as far as autorotation is concerned).

E.g., you take a screenshot of your UI at a given moment, then display it, maybe applying some effects to it. If the device is rotated in this context, your overall UI will rotate accordingly, while the snapshot you took will not (it will still reflect the initial device orientation). What you want, then, is to freeze autorotation while the snapshot is shown. Another example: on top of your fully “elastic” UI, you display some piece of information which is not meant to autorotate. Again, what you want is to freeze autorotation while that piece of information is displayed.

Under iOS5, this was really straightforward, because each time an autorotation event is detected, UIKit sends your controllers the shouldAutorotateToInterfaceOrientation: message. There you have a chance to deny autorotating to a specific interface orientation according to your criteria.

Under iOS6, it is equally straightforward, except for the unlucky naming of the cornerstone iOS6 autorotation method, namely shouldAutorotate. That name leads you (or at least led me) into thinking that you can decide there whether (and when) your view can autorotate. Wrong. The shouldAutorotate method actually serves an optimisation purpose: if your view controller’s shouldAutorotate returns NO, then the framework will not forward any autorotation messages to it.

So, if you want to control the conditions under which your controllers autorotate, you will have to either leave that method undefined or define it so as to always return YES. The real “meat” of autorotation control thus lies in the supportedInterfaceOrientations method. E.g., it could be defined as:

[sourcecode]
- (NSUInteger)supportedInterfaceOrientations {

    if ([self canAutorotateNow])
        return UIInterfaceOrientationMaskAll;

    if (UIInterfaceOrientationIsLandscape([UIApplication sharedApplication].statusBarOrientation))
        return UIInterfaceOrientationMaskLandscape;
    return UIInterfaceOrientationMaskPortrait;
}
[/sourcecode]

You see, the idea is to check whether autorotation is frozen at the moment supportedInterfaceOrientations is called; if it is, then return as supported only the orientation mask corresponding to the current status bar orientation.
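For completeness, here is a minimal sketch of how the freezing condition itself might be wired up; the autorotationFrozen property and canAutorotateNow are placeholders for whatever condition your app uses:

[sourcecode]
@property (nonatomic, assign, getter=isAutorotationFrozen) BOOL autorotationFrozen;

//-- always YES: returning NO would stop UIKit from even querying
//-- supportedInterfaceOrientations (see the discussion above)
- (BOOL)shouldAutorotate {
    return YES;
}

- (BOOL)canAutorotateNow {
    return !self.isAutorotationFrozen;
}
[/sourcecode]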

Categories
Tutorials

Playing a secondary theme in Cocos2D

Cocos2D offers CocosDenshion, an easy-to-use framework for working with audio.

CocosDenshion allows you to play background music and some effects on top of it. This is clearly aimed at games, and it usually works really well. The differences between background music and effects can be summarized as follows (a short usage sketch follows the list):

  1. background music is thought of as potentially long, so it is handled as a long audio stream;
  2. background music is thought of as continuous, so it is played in exclusive mode;
  3. effects are thought of as smaller in size, so they are loaded into memory entirely;
  4. effects are by their very definition meant to be played many times, so they are cached in memory for reuse;
  5. effects can be mixed with background music, so they do not “steal” the audio subsystem.
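For reference, standard CocosDenshion usage through its SimpleAudioEngine façade looks like this (the file names are made up):

    //-- long, streamed, exclusive background track
    [[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"main-theme.mp3" loop:YES];

    //-- short, preloaded, mixable effect
    [[SimpleAudioEngine sharedEngine] preloadEffect:@"splash.wav"];
    [[SimpleAudioEngine sharedEngine] playEffect:@"splash.wav"];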

This works well as long as your effects are pretty small in size; otherwise, your memory requirements will quickly grow. On iOS this is a no-no, since your app has very little memory to run with. Another case where CocosDenshion falls short is when you want your effects to be played continuously in a loop.

Say, for example, that you have a main background theme that is played in a loop, and then a secondary theme that you would like to be played in a loop as well when your character enters some given state.

Such a scenario led me to create a small category on CocosDenshion’s SimpleAudioEngine which defines four methods:


-(void) playForegroundMusic:(NSString*)filePath loop:(BOOL)loop;
-(void) stopForegroundMusic;
-(void) pauseForegroundMusic;
-(void) resumeForegroundMusic;

playForegroundMusic will simply play your secondary theme on top of your background music without claiming exclusive access to the audio subsystem. You can find it on my github together with the rest of my Cocos2D snippets.
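For the curious, the core idea is simply to use a dedicated AVAudioPlayer instead of CocosDenshion’s exclusive background channel, so the secondary theme mixes with whatever is already playing. A minimal sketch (error handling omitted; the actual code on github differs in detail):

    #import <AVFoundation/AVFoundation.h>

    static AVAudioPlayer* __foregroundPlayer = nil;

    -(void) playForegroundMusic:(NSString*)filePath loop:(BOOL)loop {
        NSURL* url = [NSURL fileURLWithPath:filePath];
        __foregroundPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
        __foregroundPlayer.numberOfLoops = loop ? -1 : 0;   //-- -1 means loop forever
        [__foregroundPlayer play];                          //-- mixes with the background music
    }

    -(void) stopForegroundMusic   { [__foregroundPlayer stop]; }
    -(void) pauseForegroundMusic  { [__foregroundPlayer pause]; }
    -(void) resumeForegroundMusic { [__foregroundPlayer play]; }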


Categories
Main Tutorials

cocos2d and the new retina iPad – take 2

In a previous post about cocos2d and the new retina iPad, I described a quick solution to the problem of supporting higher-resolution images. The main objective of that work was to make it easier to rebuild your app for submission to the App Store before you could provide higher-resolution images for the new iPad. I mentioned, anyway, that that approach was just a quick hack and not a full answer to the need to provide 4 (four!) different versions of each of your artworks.

Now, I would like to quickly describe a different approach aimed at reducing that number to 2 (if you are willing to discontinue support for older iPhones and iPods, up to the iPhone 3GS/iPod Touch 3G). This is possible by:

  1. providing a “@2x” (aka “-hd”) version;
  2. providing a “@2x-ipad” (aka “-hd-ipad”) version;
  3. making the non-retina iPads (iPad 1 and 2) use the “@2x” version by default.

This is made possible through a new category I wrote that you can add to your project to transparently get the behavior described above.

The code

As you can see, the changes are pretty trivial. The only thing you have to do is inspect the two constants defined at the beginning of the file to check that the file suffixes match your conventions.

[sourcecode]
//
// CCFileUtils+SDSDeviceSuffix.m
// MantraPhoneTest
//
// Created by sergio on 3/19/12.
// Copyright 2012 Sergio De Simone, Freescapes Labs. All rights reserved.
//

#import "cocos2d.h"

#define CC_IPAD_DISPLAY_FILENAME_SUFFIX @"~ipad"
#define CC_RETINA_IPAD_DISPLAY_FILENAME_SUFFIX @"-hd~ipad"

#ifdef CC_RETINA_IPAD_DISPLAY_FILENAME_SUFFIX

//////////////////////////////////////////////////////////////////////////////////////////////
@implementation CCFileUtils (SDSDeviceSuffix)

//////////////////////////////////////////////////////////////////////////////////////////////
+ (NSFileManager*)localFileManager {
    static NSFileManager *__localFileManager = nil;

    if (!__localFileManager)
        __localFileManager = [[NSFileManager alloc] init];
    return __localFileManager;
}

//////////////////////////////////////////////////////////////////////////////////////////////
+ (NSString*)getPathForSuffix:(NSString*)path suffix:(NSString*)suffix {

    NSString *pathWithoutExtension = [path stringByDeletingPathExtension];
    NSString *name = [pathWithoutExtension lastPathComponent];

    //-- check if path already has the suffix
    if ([name rangeOfString:suffix].location != NSNotFound) {
        CCLOG(@"cocos2d: WARNING Filename(%@) already has the suffix %@. Using it.", name, suffix);
        return path;
    }

    NSString *extension = [path pathExtension];

    if ([extension isEqualToString:@"ccz"] || [extension isEqualToString:@"gz"]) {
        // All ccz / gz files should be in the format filename.xxx.ccz
        // so we need to pull off the .xxx part of the extension as well
        extension = [NSString stringWithFormat:@"%@.%@", [pathWithoutExtension pathExtension], extension];
        pathWithoutExtension = [pathWithoutExtension stringByDeletingPathExtension];
    }

    NSString *retinaName = [pathWithoutExtension stringByAppendingString:suffix];
    retinaName = [retinaName stringByAppendingPathExtension:extension];

    if ([[self localFileManager] fileExistsAtPath:retinaName])
        return retinaName;

    CCLOG(@"cocos2d: CCFileUtils: Warning HD file not found (%@): %@", suffix, [retinaName lastPathComponent]);

    return nil;
}

//////////////////////////////////////////////////////////////////////////////////////////////
+ (NSString*)getDoubleResolutionImage:(NSString*)path {

#if CC_IS_RETINA_DISPLAY_SUPPORTED

    NSString *retinaPath;

    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {

        if (CC_CONTENT_SCALE_FACTOR() == 2) {
            //-- retina iPad: try -hd~ipad first, then ~ipad, then -hd
            if ((retinaPath = [self getPathForSuffix:path suffix:CC_RETINA_IPAD_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            } else if ((retinaPath = [self getPathForSuffix:path suffix:CC_IPAD_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            } else if ((retinaPath = [self getPathForSuffix:path suffix:CC_RETINA_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            }
        } else {
            //-- non-retina iPad: try ~ipad first, then fall back to the -hd version
            if ((retinaPath = [self getPathForSuffix:path suffix:CC_IPAD_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            } else if ((retinaPath = [self getPathForSuffix:path suffix:CC_RETINA_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            }
        }

    } else {

        if (CC_CONTENT_SCALE_FACTOR() == 2) {
            //-- retina iPhone/iPod: use the -hd version
            if ((retinaPath = [self getPathForSuffix:path suffix:CC_RETINA_DISPLAY_FILENAME_SUFFIX])) {
                return retinaPath;
            }
        }
    }

#endif // CC_IS_RETINA_DISPLAY_SUPPORTED

    return path;
}

@end

#endif

[/sourcecode]
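With the category compiled into the project, sprite-loading code stays unchanged; for instance (file names are made up):

[sourcecode]
//-- on a retina iPad this picks "background-hd~ipad.png" if present,
//-- falling back to "background~ipad.png" and then to "background-hd.png"
CCSprite* bg = [CCSprite spriteWithFile:@"background.png"];
[/sourcecode]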


It is available on github and free for your use.

Categories
Main Tutorials

Boundary value analysis for bug fixing

Anyone who has worked in quality assurance or is serious about testing knows about a technique for defining test cases called boundary value analysis. Here is how Wikipedia describes it:

Boundary value analysis is a software testing technique in which tests are designed to include representatives of boundary values. […] Since these boundaries are common locations for errors that result in software faults they are frequently exercised in test cases.

In short: faults like boundaries.

This principle has been of great help to me lately, while investigating a curious bug that affected a customer of mine and, what’s more, only one of their iPads. It was a nasty bug, of the kind that is not easily reproducible but keeps popping up from time to time.

Usually, what you do in such cases is ask the customer for trace logs or, if none are available, ask her to describe how she got there, and so on, in an effort to shed some light on the apparent randomness of the behavior. Unfortunately, this approach only rarely succeeds, due to a multiplicity of factors (the logs are not meaningful or not available, the customer cannot describe exactly what she did, and so on).

In such cases, which usually lead close to despair and to many hours of sitting in front of a monitor in frustration, code review is really the only way to go. Still, when faced with “perfect” code, where you can find no evident problem, what you really need is some criterion to guide your search. This is where boundary value analysis comes into the picture.

If you like details, here they are, so you can better grasp the situation I am describing: the app was a simple, animated, CSS3 clock; its main feature was its unique, copyrighted design, which you can appreciate below; it sported a continuous sweep of the hands and some specific lighting to reproduce a realistic shadow effect of the clock hands against the background. What happened was that sometimes, on launching the app, the seconds hand’s shadow, and only that, behaved in a crazy way. All of the other clock hands and their shadows continued to work correctly.


What was really striking was that all three hands and their shadows used the same CSS3 animation; so why was just one of them misbehaving in the first place? It had to do with the specific position of the seconds hand’s shadow at the moment the animation started; but, even more puzzling, that was exactly the same initial position as the seconds hand itself, and the hand always moved smoothly. I was really lost.

When I started to review my code, I immediately noticed that there were indeed a few boundaries I could inspect. The animation was defined in terms of an interpolated rotation across four cardinal points. At each cardinal point, specified by its angle, the position of the clock hand and its shadow was defined in terms of a rotation and a translation with respect to the 12-o’clock rest position.

Based on this, I designed a test to stress the animation when the starting time was close to those cardinal points. The test repeatedly started the animation, progressively advancing the start time a little each run. This approach turned out to be right: it quickly showed that there was a tiny interval around one of those cardinal points where the seconds hand and its shadow almost overlapped. Almost: their distance was close to zero and definitely negligible, but definitely not zero (think 1.0e-16). This produced a number which was not valid CSS3, so the animation failed.

Rounding that tiny number to zero fixed the bug.
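The original app was JavaScript/CSS, but the fix boils down to a one-line clamp worth spelling out; here is a sketch of the principle in C (the 1.0e-9 threshold is illustrative):

    #include <math.h>

    //-- snap near-zero values to exactly zero before serializing them:
    //-- something like 1.0e-16 would otherwise end up in exponent notation
    //-- ("1e-16"), which the CSS animation choked on
    static double clampToZero(double v) {
        return (fabs(v) < 1.0e-9) ? 0.0 : v;
    }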

Summing up: when the only resort you have for finding a nasty bug is code review, boundary value analysis can be a valuable friend.

Categories
Main Tutorials

Smoothing CCGridActions in Cocos2D for iPhone

CCGridActions are a great feature of cocos2d. They provide nice 2D and “3D” effects that you can use to make your app or game more appealing. I have already written about them, mostly about the high cost they exact in terms of CPU time (and FPS), which makes them barely usable in a real, sufficiently complex scenario. In a recent post of mine, I described a solution to this issue which, though not a full replacement for “real” grid actions, makes them usable, fast, and not at all demanding in terms of processing power.

Here, I would like to point out another glitch that grid actions may exhibit, and hint at a solution to it. This time, the glitch has nothing to do with the architecture of grid actions; rather, it lies in the implementation details of particular actions. First of all, let’s try to understand what glitch I am talking about: have a look at the video below.

Liquid Effect

As you can see, this is a “liquid” deformation of a sprite, trying to resemble the effect that would be seen if the sprite were a piece of fabric floating on water. Pay attention to the moment when the deformation begins and to the moment when it ends. The deformation effect is repeated several times, so that it is easier for you to catch the “jump” that the sprite makes at those times. Now, to see what I am aiming at, have a look at the second video below, showing what the deformation effect will look like at the end of the post.

Improved Liquid Effect

As you see, the start and end of the deformation are much smoother. Now, to the code!

Without going into much detail, the method responsible for the deformation is:

[sourcecode]
- (void)update:(ccTime)time {
    int i, j;
    for (i = 1; i < gridSize_.x; i++) {
        for (j = 1; j < gridSize_.y; j++) {
            ccVertex3F v = [self originalVertex:ccg(i,j)];
            v.x += (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.y * .01f) * amplitude * amplitudeRate);
            v.y += (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.x * .01f) * amplitude * amplitudeRate);
            [self setVertex:ccg(i,j) vertex:v];
        }
    }
}
[/sourcecode]

You’ll notice the two statements that actually deform, at any given time, each of the vertices making up the sprite’s OpenGL texture definition:

[sourcecode]
v.x += (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.y * .01f) * amplitude * amplitudeRate);
v.y += (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.x * .01f) * amplitude * amplitudeRate);
[/sourcecode]

What happens here is that the vertex positions are made to follow a sinusoidal wave. So, the first important point you have to stick to is: the overall duration of the action should be an integral multiple of the sinusoidal period:

[sourcecode]
CCSequence* effect = [CCSequence actions:
                      [CCLiquid actionWithWaves:2
                                      amplitude:10
                                           grid:grid
                                       duration:kDuration],
                      [CCStopGrid action],
                      nil];
[node runAction:effect];
[/sourcecode]

The “waves” parameter represents the number of oscillations per second, so its inverse, 1/2.0, is the period. Any integral multiple of it will do as a duration: 2.0 or 2.5, but not 1.7.

The second thing we can notice is the phase inter-modulation that is applied; streamlining the assignment, we have:

[sourcecode]
v.x += K * sinf(2*pi*w*t + v.y * k);
v.y += K * sinf(2*pi*w*t + v.x * k);
[/sourcecode]

where k and K are both constants. Now, you clearly see the problem with the phase: at time 0, we have:

[sourcecode]
v.x += K * sinf(v.y * k);
v.y += K * sinf(v.x * k);
[/sourcecode]

This means that at the very start of the deformation we already have a non-zero value for the vertex deformation. On the contrary, to get a smooth deformation, we want those values to grow smoothly as well. In other words, at time 0, the displacements of v.x and v.y should be zero, and then grow from there up to their maximum.

The solution to this is applying a transformation to the phase shift, so that it is not fixed, i.e., depending only on the vertex coordinates, but also varies with the actual moment in time. The kind of transformation we would like is a bump that is zero at the start and at the end of the period and maximal in the middle, something like:

f(x) = (4 * (x/T) * (1 - x/T))^(1/N)

where T is the period, x is the time, and N is a constant that will make the curve more or less flat. The following image shows a sample of the curve for T == 2 and N == 3.

Taking into account that the time argument to the update method is normalized, i.e., it varies between 0 and 1, the following simplified formula (with T == 1 and N == 1) can be applied:

c = 4 * (1/4 - (t - 1/2)^2) = 4 * t * (1 - t)

So we have:

[sourcecode]
- (void)update:(ccTime)time {
    int i, j;
    float c = 4*(1/4.0 - (time-1/2.0) * (time-1/2.0));
    for (i = 1; i < gridSize_.x; i++) {
        for (j = 1; j < gridSize_.y; j++) {
            ccVertex3F v = [self originalVertex:ccg(i,j)];
            float dx = (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.x * .01f * c) * amplitude * amplitudeRate);
            float dy = (time-duration_)/duration_*(sinf(time*(CGFloat)M_PI*waves*2 + v.y * .01f * c) * amplitude * amplitudeRate);
            v.x += dy;
            v.y += dx;
            [self setVertex:ccg(i,j) vertex:v];
        }
    }
}
[/sourcecode]

Where a new variable makes its entry:

[sourcecode]
float c = 4*(1/4.0 - (time-1/2.0) * (time-1/2.0));
[/sourcecode]

which, used as a multiplier for the phase shift, gives the result shown in the second video above. It is worth noticing that this is a simple parabolic curve; even taking its square root to make it flatter would not have produced significantly more smoothing.

Categories
Tutorials

How to give UIWebView Rounded Corners and a Shadow

If you would like to give your UIWebView rounded corners, you can find several recipes on the web, all converging on this code snippet:


webView_.layer.cornerRadius = 10;
webView_.clipsToBounds = YES;

This will produce a nice rounded corner web view like in the picture below.

Everything is wonderful. Now, what about a shadow below the web view? Wouldn’t it be even more beautiful? So, let’s further tweak the CALayer associated with the web view so that we can get a shadow almost for free (courtesy of Mike Nachbaur):


webView_.layer.shadowColor = [UIColor blackColor].CGColor;
webView_.layer.shadowOpacity = 0.7f;
webView_.layer.shadowOffset = CGSizeMake(10.0f, 10.0f);
webView_.layer.shadowRadius = 5.0f;
webView_.layer.masksToBounds = NO;
UIBezierPath *path = [UIBezierPath bezierPathWithRect:webView_.bounds];
webView_.layer.shadowPath = path.CGPath;

Anyway, the result is not as expected:

Ahemmm… the shadow did away with the rounded corners…

The trouble lies with the CALayer’s `masksToBounds` option conflicting with the UIWebView’s `clipsToBounds`…

What comes to the rescue is the fact that `UIWebView` is just a wrapper around a `UIScrollView`, which is in charge of displaying the rendered content… this is where the rounding actually has to be done. So let’s try it this way:


webView_.layer.cornerRadius = 10;
for (UIView* subview in webView_.subviews)
subview.layer.cornerRadius = 10;

webView_.layer.shadowColor = [UIColor blackColor].CGColor;
webView_.layer.shadowOpacity = 0.7f;
webView_.layer.shadowOffset = CGSizeMake(10.0f, 10.0f);
webView_.layer.shadowRadius = 5.0f;
webView_.layer.masksToBounds = NO;
UIBezierPath *path = [UIBezierPath bezierPathWithRect:webView_.bounds];
webView_.layer.shadowPath = path.CGPath;

Now the result is what we hoped for…

Categories
Tutorials

Pinching and Panning with Cocos2D and UIGestureRecognizer

I have had a hard time lately getting zooming and scrolling to work in a Cocos2d-based app I am writing. There are indeed many tutorials on the Web, but none that could really give me the crucial hint at how to make everything work out well. So here go my findings, in the hope that they may be useful to other developers who find themselves stuck with this.

First of all let me clarify that there are two kinds of problems in this:

  1. getting Cocos2D to recognize and dispatch multitouch events to your layer class;
  2. defining the geometrical transformation that gets you smooth zooming or scrolling (taking into account the boundaries of your layer hierarchy).

As to 1., I understand that in principle Cocos2D supports the ccTouches*:withEvent: set of methods (where * can be Began, Moved, or Ended); the only problem is that it seems hard to get them effectively called. Or, at least, not as straightforward as using the targeted versions of the same methods (which handle single touches). I am not going to use those methods or enter into the topic of how to enable them; rather, I rely on the more advanced gesture recognizers that the iOS SDK has offered since version 3.2. I think this is the way to go for multi-gesture handling, since gesture recognizers make everything easier.
As to 2., I understand that handling the geometric transformation can be done in different ways, and I am no specialist in geometric transformations. So, I am only aiming at giving an example of how it can be done in a generic (I hope) fashion.

The Context

First of all, some information about the cocos2d scene that I am trying to zoom and drag around. It has got a main CCLayer that acts as a container for multiple CCLayers.
This design is necessary because I have a UI layer that I want to remain fixed; a large background layer (some 4000×1000 pixels), where some animations take place; and an interactive layer where the user can interact with a few sprites by moving them around.
Nothing really fancy here, but as the user moves the sprites around, the background also moves according to its own logic; this means that the layers composing my scene get displaced with respect to one another.

The Objective

At some point, besides dragging the sprites, zooming and panning also get enabled, so the user can move the overall scene around, or scale it up and down to see it fully or in part.
The objectives here are:

  • for dragging: move the whole scene around, with an inertial effect when the touch ends (so the image keeps scrolling a bit in the same direction of movement), and a spring effect to make the scene bounce when it is dragged beyond its physical boundaries;
  • for pinching: zoom in and out without displacing (if possible) the pinch center and without the scene ever revealing the black background behind it.

The Code

First of all, in my container layer’s init method, I create and attach two gesture recognizers:
[sourcecode language="c"]
- (id)init {
    ....
    _panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];

    _pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchGesture:)];
    ....
}
[/sourcecode]
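The elided part of init is also where the recognizers need to be attached to a view; with cocos2d 1.x this typically means adding them to the director’s OpenGL view, roughly like this:

[sourcecode language="c"]
//-- attach the recognizers to the view cocos2d renders into
UIView* glView = [[CCDirector sharedDirector] openGLView];
[glView addGestureRecognizer:_panGestureRecognizer];
[glView addGestureRecognizer:_pinchGestureRecognizer];
[/sourcecode]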
The handlePanFrom method is also added to the same class:
[sourcecode language="c"]
- (void)handlePanFrom:(UIPanGestureRecognizer*)recognizer {

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        //-- nothing to do here
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {

        //-- calculate the nominal displacement of the layer
        CGPoint translation = [recognizer translationInView:recognizer.view];
        translation = ccp(translation.x, -translation.y);
        [recognizer setTranslation:CGPointZero inView:recognizer.view];

        self.position = ccp(self.position.x + translation.x, self.position.y);

    } else if (recognizer.state == UIGestureRecognizerStateEnded) {

        CGPoint velocity = [recognizer velocityInView:recognizer.view];
        //-- first, calculate the rect of the layer we want to center
        //-- then, calculate the required displacement so that it fills up the screen
        CGRect rect = [self boundedRectForLayer:[self fonsLayer]];
        CGPoint delta = [self displacementForRect:rect withVelocity:velocity andInertia:0.2];

        CCMoveBy* moveBy = [CCMoveBy actionWithDuration:0.2 position:delta];
        [self stopAllActions];
        [self runAction:[CCEaseElastic actionWithAction:moveBy period:2]];
    }
}
[/sourcecode]
and finally the code for the pinch gesture handler:
[sourcecode language="c"]
- (void)handlePinchGesture:(UIPinchGestureRecognizer*)gestureRecognizer {

    const CGFloat kMaxScale = 1.0;
    const CGFloat kMinScale = [self fonsLayer].scaleToFit;
    const CGFloat kSpeed = 0.1;

    _pinchGestureRecognizer.cancelsTouchesInView = YES;

    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {

        //-- calculate the anchorPoint based on the pinch center so that zooming in/out is centered there
        CGPoint location = [gestureRecognizer locationInView:[[CCDirector sharedDirector] openGLView]];
        CGPoint glLocation = [[CCDirector sharedDirector] convertToGL:location];
        CGPoint locationInSelf = [self convertToNodeSpace:glLocation];

        if (gestureRecognizer.velocity < 0 && self.scale > kMinScale)
            self.anchorPoint = ccp(locationInSelf.x/self.contentSize.width, locationInSelf.y/self.contentSize.height);
    }

    if (gestureRecognizer.state == UIGestureRecognizerStateBegan ||
        gestureRecognizer.state == UIGestureRecognizerStateChanged) {

        //-- if we have reached the boundaries of the zooming, do nothing
        if ((gestureRecognizer.velocity <= 0 && self.scale <= kMinScale) || (gestureRecognizer.velocity >= 0 && self.scale >= kMaxScale))
            return;

        //-- calculate the new scale within its limits
        CGFloat newScale = self.scale * (1 + gestureRecognizer.velocity * kSpeed);
        newScale = MIN(kMaxScale, MAX(newScale, kMinScale));
        self.scale = newScale;

        //-- first, calculate the rect of the layer we want to center
        //-- then, calculate the required displacement so that it fills up the screen
        CGRect rect = [self boundedRectForLayer:[self fonsLayer]];
        CGPoint delta = [self displacementForRect:rect];

        self.position = ccpAdd(self.position, delta);

    } else if (gestureRecognizer.state == UIGestureRecognizerStateEnded) {
        //-- nothing to do here; otherwise, calc rect/delta like above, then apply action like when panning
    }
}
[/sourcecode]
The above code uses a few methods that are critical for the correct behavior of panning and pinching. Here they are:
[sourcecode language="c"]
///////////////////////////////////////////////////////////////
/////// given a layer, it calculates its nominal rect, then maps it to the world space;
/////// the calculated rect represents the position of the layer on screen
///////////////////////////////////////////////////////////////
- (CGRect)boundedRectForLayer:(CCNode*)layer {
    CGRect rect = CGRectMake(0, 0, layer.contentSizeInPixels.width, layer.contentSizeInPixels.height);
    return CGRectApplyAffineTransform(rect, [layer nodeToWorldTransform]);
}

///////////////////////////////////////////////////////////////
/////// calculates the displacement required to make rect completely cover the screen area,
/////// so that no portion of the background is revealed; if velocity and inertia are given,
/////// a component is added to the displacement to ease the movement in or out.
///////////////////////////////////////////////////////////////
- (CGPoint)displacementForRect:(CGRect)rect withVelocity:(CGPoint)velocity andInertia:(float)inertia {

    CGSize winSize = [[CCDirector sharedDirector] winSize];
    float hShootEnd = rect.origin.x + rect.size.width - winSize.width;
    float hShootStart = rect.origin.x;
    CGPoint extraScroll = ccpMult(velocity, inertia);

    if (velocity.x > 0)
        extraScroll.x = MIN(extraScroll.x, -hShootStart);
    else
        extraScroll.x = MAX(extraScroll.x, -hShootEnd);

    return ccp(-MIN(hShootEnd, MAX(hShootStart, -extraScroll.x)),
               -MIN(rect.origin.y + rect.size.height - winSize.height, MAX(rect.origin.y, 0)));
}

///////////////////////////////////////////////////////////////
/////// helper method
///////////////////////////////////////////////////////////////
- (CGPoint)displacementForRect:(CGRect)rect {
    return [self displacementForRect:rect withVelocity:ccp(0,0) andInertia:0];
}
[/sourcecode]
Final Note

I hope the code is self-explanatory, given its factorization and the few comments it includes.