Touch Events

Touch events in iPhone OS are based on a Multi-Touch model. Instead of using a mouse and a keyboard, users touch the screen of the device to manipulate objects, enter data, and otherwise convey their intentions. iPhone OS recognizes one or more fingers touching the screen as part of a Multi-Touch sequence. This sequence begins when the first finger touches down on the screen and ends when the last finger is lifted from the screen. iPhone OS tracks fingers touching the screen throughout a multi-touch sequence and records the characteristics of each of them, including the location of the finger on the screen and the time the touch occurred. Applications often recognize certain combinations of touches as gestures and respond to them in ways that are intuitive to users, such as zooming in on content in response to a pinching gesture and scrolling through content in response to a flicking gesture.

Many classes in UIKit handle multi-touch events in ways that are distinctive to objects of the class. This is especially true of subclasses of UIControl, such as UIButton and UISlider. Objects of these subclasses— known as control objects— are receptive to certain types of gestures, such as a tap or a drag in a certain direction; when properly configured, they send an action message to a target object when that gesture occurs. Other UIKit classes handle gestures in other contexts; for example, UIScrollView provides scrolling behavior for table views, text views, and other views with large content areas.

Some applications may not need to handle events directly; instead, they can rely on the classes of UIKit for that behavior. However, if you create a custom subclass of UIView—a common pattern in iPhone OS development— and if you want that view to respond to certain touch events, you need to implement the code required to handle those events. Moreover, if you want a UIKit object to respond to events differently, you have to create a subclass of that framework class and override the appropriate event-handling methods.

Events and Touches

In iPhone OS, a touch is the presence or movement of a finger on the screen that is part of a unique multi-touch sequence. For example, a pinch-close gesture has two touches: two fingers on the screen moving toward each other from opposite directions. There are simple single-finger gestures, such as a tap, a double-tap, or a flick (where the user quickly swipes a finger across the screen). An application might recognize even more complicated gestures; for example, an application might have a custom control in the shape of a dial that users “turn” with multiple fingers to fine-tune some variable.

An event is an object that the system continually sends to an application as fingers touch the screen and move across its surface. The event provides a snapshot of all touches during a multi-touch sequence, most importantly the touches that are new or have changed for a particular view. A multi-touch sequence begins when a finger first touches the screen. Other fingers may subsequently touch the screen, and all fingers may move across the screen. The sequence ends when the last of these fingers is lifted from the screen. An application receives event objects during each phase of any touch.

Touches have both temporal and spatial aspects. The temporal aspect, called a phase, indicates when a touch has just begun, whether it is moving or stationary, and when it ends—that is, when the finger is lifted from the screen. A touch also has the current location in a view or window and the previous location (if any). When a finger touches the screen, the touch is associated with a window and a view and maintains that association throughout the life of the event. If multiple touches arrive at once, they are treated together only if they are associated with the same view. Likewise, if two touches arrive in quick succession, they are treated as a multiple tap only if they are associated with the same view.

A multi-touch sequence and touch phases

In iPhone OS, a UITouch object represents a touch, and a UIEvent object represents an event. An event object contains all touch objects for the current multi-touch sequence and can provide touch objects specific to a view or window. A touch object is persistent for a given finger during a sequence, and UIKit mutates it as it tracks the finger throughout the sequence. The touch attributes that change are the phase of the touch, its location in a view, its previous location, and its timestamp. Event-handling code evaluates these attributes to determine how to respond to the event.

Relationship of a UIEvent object and its UITouch objects

The system can cancel a multi-touch sequence at any time and an event-handling application must be prepared to respond appropriately. Cancellations can occur as a result of overriding system events, such as an incoming phone call.

Event Delivery

The delivery of an event to an object for handling occurs along a specific path. As described in “Core Application Architecture,” when users touch the screen of a device, iPhone OS recognizes the set of touches and packages them in a UIEvent object that it places in the current application’s event queue. The event object encapsulates the touches for a given moment of a multi-touch sequence. The singleton UIApplication object that is managing the application takes an event from the top of the queue and dispatches it for handling. Typically, it sends the event to the application’s key window—the window that is currently the focus for user events—and the UIWindow object representing that window sends the event to the first responder for handling. (The first responder is described in “Responder Objects and the Responder Chain.”)

An application uses hit-testing to find the first responder for an event; it recursively calls hitTest:withEvent: on the views in the view hierarchy (going down the hierarchy) to determine the subview in which the touch took place. The touch is associated with that view for its lifetime, even if it subsequently moves outside the view. “Event-Handling Techniques” discusses some of the programmatic implications of hit-testing.
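Conceptually, the recursive traversal behaves like the following simplified sketch; this is not UIKit’s actual implementation, and the alpha cutoff and front-to-back ordering are assumptions based on documented behavior:

```objc
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Views that cannot receive events are skipped entirely.
    if (!self.userInteractionEnabled || self.hidden || self.alpha <= 0.01)
        return nil;
    if (![self pointInside:point withEvent:event])
        return nil;
    // Search subviews front to back; the last subview in the array is frontmost.
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint converted = [subview convertPoint:point fromView:self];
        UIView *hit = [subview hitTest:converted withEvent:event];
        if (hit)
            return hit;
    }
    // No subview claimed the touch, so this view is the hit view.
    return self;
}
```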

The UIApplication object and each UIWindow object dispatch events in the sendEvent: method. (Both classes declare an identically named method.) Because these methods are funnel points for events coming into an application, you can subclass UIApplication or UIWindow and override sendEvent: to monitor events or perform special event handling. However, most applications have no need to do this.

Responder Objects and the Responder Chain

A responder object is an object that can respond to events and handle them. UIResponder is the base class for all responder objects. It defines the programmatic interface not only for event handling but for common responder behavior. UIApplication, UIView, and all UIKit classes that descend from UIView (including UIWindow) inherit directly or indirectly from UIResponder.

The first responder is the responder object in the application (usually a UIView object) that is the current recipient of touches. A UIWindow object sends the first responder an event in a message, giving it the first shot at handling the event. If the first responder doesn’t handle the event, it passes the event (via message) to the next responder in the responder chain to see if it can handle it.

The responder chain is a linked series of responder objects. It allows responder objects to transfer responsibility for handling an event to other, higher-level objects. An event proceeds up the responder chain as the application looks for an object capable of handling the event. The responder chain consists of a series of “next responders” in the following sequence:

  1. The first responder passes the event to its view controller (if it has one) and then on to its superview.
  2. Each subsequent view in the hierarchy similarly passes to its view controller first (if it has one) and then to its superview.
  3. The topmost enclosing view passes the event to the UIWindow object.
  4. The UIWindow object passes the event to the singleton UIApplication object.

If the application finds no responder object to handle the event, it discards the event.

Any responder object in the responder chain may implement a UIResponder event-handling method and thus receive an event message. But a responder may decline to handle a particular event or may handle it only partially. In that case, it can forward the event message to the next responder in the chain.
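For example, a responder that handles an event only partially might pass it along like this (a minimal sketch):

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Perform any partial handling here, then give the next responder a chance.
    [self.nextResponder touchesBegan:touches withEvent:event];
}
```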

Action messages also make use of the responder chain. When users manipulate a UIControl object such as a button or page control, the control object (if properly configured) sends an action message to a target object. But if nil is specified as the target, the application initially routes the message as it does an event message: to the first responder. If the first responder doesn’t handle the action message, it sends it to its next responder, and so on up the responder chain.
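For instance, passing nil as the target when configuring a control routes its action up the responder chain; in this sketch, editItem: is a hypothetical action method:

```objc
UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
[button addTarget:nil                    // nil target: search the responder chain
           action:@selector(editItem:)   // hypothetical action method
 forControlEvents:UIControlEventTouchUpInside];
```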

Regulating Event Delivery

UIKit gives applications programmatic means to simplify event handling or to turn off the stream of events completely. The following list summarizes these approaches:

  • Turning off delivery of events. By default, a view receives touch events, but you can set its userInteractionEnabled property to NO to turn off delivery of events. A view also does not receive events if it’s hidden or if it’s transparent.
  • Turning off delivery of events for a period. An application can call the UIApplication method beginIgnoringInteractionEvents and later call the endIgnoringInteractionEvents method. The first method stops the application from receiving touch event messages entirely; the second method is called to resume the receipt of such messages. You sometimes want to turn off event delivery while your code is performing animations.
  • Turning on delivery of multiple touches. By default, a view ignores all but the first touch during a multi-touch sequence. If you want the view to handle multiple touches, you must enable multiple touches for the view. This can be done programmatically by setting the multipleTouchEnabled property of your view to YES, or in Interface Builder using the inspector for the related view.
  • Restricting event delivery to a single view. By default, a view’s exclusiveTouch property is set to NO. If you set the property to YES, you mark the view so that, while it is tracking touches, it is the only view in the window that is tracking touches; other views in the window cannot receive those touches. However, a view that is marked “exclusive touch” does not receive touches that are associated with other views in the same window. If a finger contacts an exclusive-touch view, the touch is delivered only if that view is the only view tracking a finger in that window. If a finger touches a non-exclusive view, the touch is delivered only if no other finger is being tracked in an exclusive-touch view.
  • Restricting event delivery to subviews. A custom UIView class can override hitTest:withEvent: to restrict the delivery of multi-touch events to its subviews. See “Event-Handling Techniques” for a discussion of this technique.
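Each of these switches is a one-line property assignment or method call; a sketch (myView is a placeholder):

```objc
myView.userInteractionEnabled = NO;  // this view no longer receives touch events
myView.multipleTouchEnabled   = YES; // deliver every touch, not just the first
myView.exclusiveTouch         = YES; // while tracking, block touches to other views

// Suspend touch delivery application-wide, for example during an animation:
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];
// ... run the animation ...
[[UIApplication sharedApplication] endIgnoringInteractionEvents];
```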

Handling Multi-Touch Events

To handle multi-touch events, your custom UIView subclass (or, less frequently, your custom UIApplication or UIWindow subclass) must implement at least one of the UIResponder methods for event handling. The following sections describe these methods, discuss approaches for handling common gestures, show an example of a responder object that handles a complex sequence of multi-touch events, and suggest some techniques for event handling.

The Event-Handling Methods

During a multi-touch sequence, the application dispatches a series of event messages. To receive and handle these messages, the class of a responder object must implement at least one of the event-handling methods declared by UIResponder.

The application sends these messages when there are new or changed touches for a given touch phase:

  • It sends the touchesBegan:withEvent: message when one or more fingers touch down on the screen.
  • It sends the touchesMoved:withEvent: message when one or more fingers move.
  • It sends the touchesEnded:withEvent: message when one or more fingers lift up from the screen.
  • It sends the touchesCancelled:withEvent: message when the touch sequence is cancelled by a system event, such as an incoming phone call.

Each of these methods is associated with a touch phase (for example, UITouchPhaseBegan), which you can determine for any UITouch object by evaluating its phase property.

Each message that invokes an event-handling method passes in two parameters. The first is a set of UITouch objects that represent new or changed touches for the given phase. The second parameter is a UIEvent object representing this particular event. From the event object you can get all touch objects for the event (allTouches) or a subset of those touch objects filtered for specific views or windows. Some of these touch objects represent touches that have not changed since the previous event message or that have changed but are in different phases.

A responder object frequently handles an event for a given phase by getting one or more of the UITouch objects in the passed-in set and then evaluating their properties or getting their locations. (If any of the touch objects will do, it can send the NSSet object an anyObject message.) One important method is locationInView:, which, if passed a parameter of self, yields the location of the touch in the responder object’s coordinate system (assuming the responder is a UIView object and the view passed as a parameter is not nil). A parallel method tells you the previous location of the touch (previousLocationInView:). Properties of the UITouch instance tell you how many taps have been made (tapCount), when the touch was created or last mutated (timestamp), and what phase it is in (phase).
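As an example, a view could use the current and previous locations to let a finger drag it around; a minimal sketch:

```objc
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Use the superview's coordinate system, since we are moving this view within it.
    CGPoint current  = [touch locationInView:self.superview];
    CGPoint previous = [touch previousLocationInView:self.superview];
    self.center = CGPointMake(self.center.x + (current.x - previous.x),
                              self.center.y + (current.y - previous.y));
}
```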

A responder class does not have to implement all of the event methods listed above. For example, if it cares only about fingers being lifted from the screen, it need only implement touchesEnded:withEvent:.

If a responder creates persistent objects while handling events during a multi-touch sequence, it should implement touchesCancelled:withEvent: to dispose of those objects when the system cancels the sequence. Cancellation often occurs when an external event—for example, an incoming phone call—disrupts the current application’s event processing. Note that a responder object should also dispose of those same objects when it receives the last touchesEnded:withEvent: message for a multi-touch sequence. (See “Event-Handling Techniques” to find out how to determine the last touch-up in a sequence.)

Handling Single and Multiple Tap Gestures

A very common gesture in iPhone applications is the tap: the user taps an object with his or her finger. A responder object can respond to a single tap in one way, a double-tap in another, and possibly a triple-tap in yet another way. To determine the number of times the user tapped a responder object, you get the value of the tapCount property of a UITouch object.

The best places to find this value are the methods touchesBegan:withEvent: and touchesEnded:withEvent:. In many cases, the latter method is preferred because it corresponds to the touch phase in which the user lifts a finger from a tap. By looking for the tap count in the touch-up phase (UITouchPhaseEnded), you ensure that the finger is really tapping and not, for instance, touching down and then dragging.

Listing shows the way to determine whether a double-tap occurred in one of your views.

Detecting a double-tap gesture
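A minimal sketch of that check, where handleDoubleTap is a hypothetical method of the view:

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // A tapCount of 2 in the touch-up phase means a completed double tap.
    if ([[touches anyObject] tapCount] == 2) {
        [self handleDoubleTap];
    }
}
```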

A complication arises when a responder object wants to handle a single-tap and a double-tap gesture in different ways. For example, a single tap might select the object and a double tap might display a view for editing the item that was double-tapped. How is the responder object to know that a single tap is not the first part of a double tap? Here is how a responder object could handle this situation using the event-handling methods just described:

  1. In touchesEnded:withEvent:, when the tap count is one, the responder object sends itself a performSelector:withObject:afterDelay: message. The selector identifies another method implemented by the responder to handle the single-tap gesture; the second parameter is an NSValue or NSDictionary object that holds the related UITouch object; the delay is some reasonable interval between a single-tap and a double-tap gesture.
  2. In touchesBegan:withEvent:, if the tap count is two, the responder object cancels the pending delayed-perform invocation by sending itself a cancelPreviousPerformRequestsWithTarget: message. If the tap count is not two, the method identified by the selector in the previous step for single-tap gestures is invoked after the delay.
  3. In touchesEnded:withEvent:, if the tap count is two, the responder performs the actions necessary for handling double-tap gestures.
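The three steps above can be sketched as follows; handleSingleTap and handleDoubleTap are hypothetical methods, and the 0.3-second delay is an assumed interval:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([[touches anyObject] tapCount] == 2) {
        // Step 2: a second tap arrived in time; cancel the pending single-tap handler.
        [NSObject cancelPreviousPerformRequestsWithTarget:self];
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    NSUInteger tapCount = [[touches anyObject] tapCount];
    if (tapCount == 1) {
        // Step 1: wait briefly in case this is the first half of a double tap.
        [self performSelector:@selector(handleSingleTap)
                   withObject:nil
                   afterDelay:0.3];
    } else if (tapCount == 2) {
        // Step 3: handle the double tap immediately.
        [self handleDoubleTap];
    }
}
```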

Detecting Swipe Gestures

Horizontal and vertical swipes are a simple type of gesture that you can track easily from your own code and use to perform actions. To detect a swipe gesture, you have to track the movement of the user’s finger along the desired axis of motion, but it is up to you to determine what constitutes a swipe. In other words, you need to determine whether the user’s finger moved far enough, if it moved in a straight enough line, and if it went fast enough. You do that by storing the initial touch location and comparing it to the location reported by subsequent touch-moved events.

Listing shows some basic tracking methods you could use to detect horizontal swipes in a view. In this example, the view stores the initial location of the touch in a startTouchPosition member variable. As the user’s finger moves, the code compares the current touch location to the starting location to determine whether it is a swipe. If the touch moves too far vertically, it is not considered to be a swipe and is processed differently. If it continues along its horizontal trajectory, however, the code continues processing the event as if it were a swipe. The processing routines could then trigger an action once the swipe had progressed far enough horizontally to be considered a complete gesture. To detect swipe gestures in the vertical direction, you would use similar code but would swap the x and y components.

Tracking a swipe gesture in a view
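A sketch of such tracking; the startTouchPosition instance variable (a CGPoint), the threshold constants, and handleHorizontalSwipe are assumptions:

```objc
#define HORIZ_SWIPE_DRAG_MIN 12  // minimum horizontal travel to count as a swipe
#define VERT_SWIPE_DRAG_MAX   4  // maximum vertical drift allowed

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Remember where the gesture started.
    startTouchPosition = [[touches anyObject] locationInView:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint current = [[touches anyObject] locationInView:self];
    if (fabs(startTouchPosition.x - current.x) >= HORIZ_SWIPE_DRAG_MIN &&
        fabs(startTouchPosition.y - current.y) <= VERT_SWIPE_DRAG_MAX) {
        // Far enough horizontally and straight enough vertically: treat it as a swipe.
        [self handleHorizontalSwipe];
    }
}
```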

Handling a Complex Multi-Touch Sequence

Taps and swipes are simple gestures. Handling a multi-touch sequence that is more complicated— in effect, interpreting an application-specific gesture—depends on what the application is trying to accomplish. You may have to track all touches through all phases, recording the touch attributes that have changed and altering internal state appropriately.

The best way to convey how you might handle a complex multi-touch sequence is through an example. Listing shows how a custom UIView object responds to touches by animating the movement of a “Welcome” placard around the screen as a finger moves it and changing the language of the welcome when the user makes a double-tap gesture. (The code in this example comes from the MoveMe sample code project, which you can examine to get a better understanding of the event-handling context.)

Handling a complex multi-touch sequence
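A compressed sketch in the spirit of that example; placardView is an assumed subview, and changeDisplayedLanguage and the animation duration are illustrative:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if ([touch tapCount] == 2) {
        [self changeDisplayedLanguage];  // hypothetical: swap the welcome text
        return;
    }
    // Snap the placard to the finger so dragging feels immediate.
    placardView.center = [touch locationInView:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    placardView.center = [[touches anyObject] locationInView:self];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Animate the placard back to the center of the view.
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.4];
    placardView.center = CGPointMake(CGRectGetMidX(self.bounds),
                                     CGRectGetMidY(self.bounds));
    [UIView commitAnimations];
}
```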

Event-Handling Techniques

Here are some event-handling techniques you can use in your code.

  • Tracking the mutations of UITouch objects
    In your event-handling code you can store relevant bits of touch state for later comparison with the mutated UITouch instance. As an example, say you want to compare the final location of each touch with its original location. In the touchesBegan:withEvent: method, you can obtain the original location of each touch from the locationInView: method and store those in a CFDictionaryRef opaque type using the addresses of the UITouch objects as keys. Then, in the touchesEnded:withEvent: method you can use the address of each passed-in UITouch object to obtain the object’s original location and compare that with its current location. (You should use a CFDictionaryRef type rather than an NSDictionary object; the latter copies its keys, but the UITouch class does not adopt the NSCopying protocol, which is required for object copying.)
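A sketch of this bookkeeping, assuming a touchBeginPoints instance variable of type CFMutableDictionaryRef created with NULL key callbacks:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint *begin = malloc(sizeof(CGPoint));
        *begin = [touch locationInView:self];
        // Key by the UITouch object's address; no copying, so NSDictionary won't do.
        CFDictionarySetValue(touchBeginPoints, touch, begin);
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint *begin = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
        CGPoint end = [touch locationInView:self];
        // Compare *begin with end here, then clean up the stored value.
        free(begin);
        CFDictionaryRemoveValue(touchBeginPoints, touch);
    }
}
```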
  • Hit-testing for a touch on a subview or layer

A custom view can use the hitTest:withEvent: method of UIView or the hitTest: method of CALayer to find the subview or layer that is receiving a touch, and handle the event appropriately. The following example detects when an “Info” image in a layer of the custom view is tapped.

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Get the touch location in window coordinates, then convert it into
    // this view's coordinate space before hit-testing the layer tree.
    CGPoint location = [[touches anyObject] locationInView:nil];
    CALayer *hitLayer = [[self layer] hitTest:[self convertPoint:location fromView:nil]];
    if (hitLayer == infoImage) {
        [self displayInfo];
    }
}

If you have a custom view with subviews, you need to determine whether you want to handle touches at the subview level or the superview level. If the subviews do not handle touches by implementing touchesBegan:withEvent:, touchesEnded:withEvent:, or touchesMoved:withEvent:, these messages propagate up the responder chain to the superview. However, because multiple taps and multiple touches are associated with the subviews where they first occurred, the superview won’t receive these touches. To ensure reception of all kinds of touches, the superview should override hitTest:withEvent: to return itself rather than any of its subviews.

  • Determining when the last finger in a multi-touch sequence has lifted

When you want to know when the last finger in a multi-touch sequence is lifted from a view, compare the number of UITouch objects in the passed-in set with the number of touches for the view maintained by the passed-in UIEvent object; if the counts are equal, the last finger has just been lifted.
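A sketch of that comparison, where finishTrackingSequence is a hypothetical cleanup method:

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // If every touch the event associates with this view is in this set,
    // the last finger has just been lifted from the view.
    if ([touches count] == [[event touchesForView:self] count]) {
        [self finishTrackingSequence];
    }
}
```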
