Bubblebird TUIO library and gestures.

While the Bubblebird library is a great implementation of the TUIO protocol, its gesture system currently has a few gotchas.

Simple gestures are defined with the GestureStep class, which describes the base gestures (tap, move) they are built from. A three-finger move gesture, for example, is defined by multiple steps: three separate touch elements that all move.
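
As a conceptual sketch of that idea (the step list and matching function below are illustrative, not Bubblebird’s actual GestureStep API), a three-finger move can be declared as three parallel move steps, one per touch element:

```actionscript
// Conceptual sketch only; Bubblebird’s real GestureStep signature differs.
// A gesture is declared as a list of steps: one touch slot per finger,
// each required to perform the same base action ("move").
var threeFingerMoveSteps:Array = [
    {slot: "A", action: "move"},
    {slot: "B", action: "move"},
    {slot: "C", action: "move"}
];

// The gesture matches once every declared slot has reported its action.
function stepsSatisfied(steps:Array, reported:Object):Boolean {
    for each (var step:Object in steps) {
        if (reported[step.slot] != step.action) return false;
    }
    return true;
}

trace(stepsSatisfied(threeFingerMoveSteps, {A: "move", B: "move", C: "move"})); // true
```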

Compound gestures (such as rotate, pinch, or other custom gestures) can have additional logic added to them. For instance, a zoom gesture is a two-finger move gesture that remembers the initial positions of the two touch elements, then compares the updated distance between them against the initial distance to determine whether the user is pinching to zoom out or spreading to zoom in. Similarly, a rotate gesture is a two-finger gesture that remembers a starting position so the difference in angle can be calculated. This system is great for allowing one to create custom gestures, perhaps a three-finger move-left/right gesture; other developers simply have to listen for that gesture to be dispatched from the object and act accordingly.
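
To make that remembered state concrete, here is a minimal sketch of the math the two gestures rely on, using plain flash.geom.Point values (the variable names and coordinates are illustrative):

```actionscript
import flash.geom.Point;

// Initial positions of the two touch elements, captured when the gesture starts.
var startA:Point = new Point(100, 100);
var startB:Point = new Point(200, 100);
var startDistance:Number = Point.distance(startA, startB);
var startAngle:Number = Math.atan2(startB.y - startA.y, startB.x - startA.x);

// Updated positions on a later move event.
var currentA:Point = new Point(90, 95);
var currentB:Point = new Point(215, 110);

// Zoom: a ratio above 1 means the fingers spread apart (zoom in);
// below 1 means a pinch (zoom out).
var zoomFactor:Number = Point.distance(currentA, currentB) / startDistance;

// Rotate: the change in the angle between the two touch elements, in radians.
var angleDelta:Number = Math.atan2(currentB.y - currentA.y, currentB.x - currentA.x) - startAngle;

trace("zoom:", zoomFactor, "rotation (degrees):", angleDelta * 180 / Math.PI);
```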

The gotcha lies with these gestures that require memory of a previous state in order to act: only one such gesture can be detected at a time.
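
The root of the limitation is easy to see in a sketch: when the remembered state lives in a single field on the one gesture instance being tracked, a second simultaneous gesture of the same type overwrites the first one’s data (the class and field names below are illustrative, not the library’s):

```actionscript
package {
    // Illustrative only: one gesture instance, one slot of remembered state.
    public class ZoomStateSketch {
        private var startDistance:Number; // shared by every zoom in progress

        public function begin(distance:Number):void {
            // A second user starting a pinch re-enters here and
            // clobbers the first user’s startDistance.
            startDistance = distance;
        }

        public function update(distance:Number):Number {
            return distance / startDistance; // now wrong for the first user
        }
    }
}
```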

Try it!

  • Create an application that has two Sprites with fills drawn to their graphics layers.
  • Add listeners for the OneFingerMoveGesture that trace out the target of the gesture (don’t forget to set up the TuioManager to dispatch this gesture).
  • Perform one-finger move gestures on the two Sprites in your running application to see the different targets.
  • Now change those to the ScrollGesture that comes with the library (again, don’t forget to set up the TuioManager to dispatch this gesture).
  • Update the listener functions to change the alpha value of the Sprite: scrolling up makes it more opaque, scrolling down more transparent.
  • Perform two-finger scroll up and down gestures on the two Sprites in your running application to see that only one Sprite has its alpha updated (a condensed sketch of the exercise follows this list).
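
A condensed sketch of the exercise might look like the following; the gesture wiring is the part to adapt, since the exact TuioManager setup calls and the scroll event’s class and property names depend on the library version (everything marked as assumed below is illustrative):

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.Event;

    public class GestureDemo extends Sprite {
        public function GestureDemo() {
            // Two Sprites with fills drawn to their graphics layers.
            var left:Sprite = makeBox(0x3366cc, 50, 50);
            var right:Sprite = makeBox(0xcc6633, 300, 50);

            // Assumed wiring: the TuioManager must be initialized with the
            // stage and told to dispatch gesture events before these fire.
            // The event name "scroll" is a placeholder for the library’s
            // actual ScrollGesture event type.
            left.addEventListener("scroll", onScroll);
            right.addEventListener("scroll", onScroll);
        }

        private function makeBox(color:uint, x:Number, y:Number):Sprite {
            var box:Sprite = new Sprite();
            box.graphics.beginFill(color);
            box.graphics.drawRect(0, 0, 200, 200);
            box.graphics.endFill();
            box.x = x;
            box.y = y;
            addChild(box);
            return box;
        }

        private function onScroll(e:Event):void {
            var box:Sprite = e.target as Sprite;
            trace("scroll target:", box);
            // Placeholder accessor: the real scroll event carries the scroll
            // delta; the property name "dY" is assumed here.
            var dy:Number = Number((e as Object)["dY"]);
            // Scrolling up raises the alpha; scrolling down lowers it.
            box.alpha = Math.max(0.1, Math.min(1, box.alpha + (dy > 0 ? 0.05 : -0.05)));
        }
    }
}
```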

If the requirements of your application are for…

  • a single user who understands that complex gestures of the same type can only be executed one at a time
  • only simple GestureStep-based Gestures
  • only native MouseEvents or TouchEvents to control the interface
then this method of gesture detection will work out great.

If the requirements call for…

  • supporting multiple users with compound gestures that need to remember additional starting data
then it’s good to architect your touch interface around that limitation.

Some solutions to that situation include:

  • having each UI element that receives the gesture know the logic for what activates that gesture (this can lead to multiple copies of similar gesture-handling code distributed throughout your application, which is less preferable)
  • creating a more involved storage/cleanup scheme for the gesture metadata so that it doesn’t conflict when multiple gestures happen at the same time (better for centralizing gesture definitions, but it adds complexity to the custom gesture management; see the sketch after this list)
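
As a minimal sketch of the second approach, assuming each TUIO touch element exposes a session ID (all names below are illustrative): key the remembered start data by the participating session IDs so that concurrent gestures each get their own slot, and delete the slot when the gesture ends.

```actionscript
import flash.geom.Point;
import flash.utils.Dictionary;

// One remembered-state slot per pair of touch sessions.
var gestureState:Dictionary = new Dictionary();

// Order-independent key so (a, b) and (b, a) share a slot.
function keyFor(sessionA:int, sessionB:int):String {
    return sessionA < sessionB ? sessionA + ":" + sessionB : sessionB + ":" + sessionA;
}

function beginZoom(sessionA:int, a:Point, sessionB:int, b:Point):void {
    gestureState[keyFor(sessionA, sessionB)] = Point.distance(a, b);
}

function updateZoom(sessionA:int, a:Point, sessionB:int, b:Point):Number {
    var start:Number = gestureState[keyFor(sessionA, sessionB)];
    return Point.distance(a, b) / start; // each concurrent pinch keeps its own baseline
}

function endZoom(sessionA:int, sessionB:int):void {
    delete gestureState[keyFor(sessionA, sessionB)]; // cleanup avoids stale state
}
```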

About Nate Frank

Nate is currently a Senior Presentation Layer Architect at Razorfish Chicago. As an SPLA, Nate participates in the technology leadership team and in resource allocation, manages full-time and contractor resources, represents technology for groups of brands across multiple clients, furthers the development of standards within the office, architects project implementations, and fosters community and mentoring.
