Basics of Multi-Touch Development
Multi-touch features greatly improve UX and user satisfaction. The technology allows users to interact with a platform using intuitive gestures with their fingers. Read this blog to learn about the science behind multi-touch technology, basic implementation, and what you need to know during development.
This article was first published on DevWerkz.
Consumer expectations are at an all-time high thanks to continuous upgrades on desktop and mobile devices. The constant tweaks and improvements made by technology giants have one common goal — to make the lives of users easier.
Gadgets and platforms with a difficult UI are dropped for more intuitive, efficient counterparts in a heartbeat.
One of the most sought-after features in today’s context is multi-touch functionality. The overwhelming popularity of smartphones and tablets created a new avenue for UI development that overtakes keyboard shortcuts by a mile.
In this blog, we’re going to cover the basics of multi-touch development: what it is, how to implement it, and the things you should keep in mind while designing gestures permitted on your platform.
What is Multi-Touch?
Multi-touch technology is an innovation that allows devices to detect multiple points of contact at once.
Instead of being limited to swiping and tapping, multi-touch allows zooming, scrolling, selecting, and more. Certain gestures can also trigger menus or control panels to open.
Multi-touch is used in both touchscreen and trackpad interfaces. Most modern touch surfaces are capacitive: a grid of electrodes carries a small electrical charge across the panel.
When a conductive object such as a finger touches the surface, it disturbs the local electrostatic field. The device reads this disturbance as input and executes the corresponding function.
Multi-touch is widely successful because it expands interface options, resulting in a more intuitive and seamless user experience.
Touch events, touch interfaces, and touch lists
To apply multi-touch development to different projects, it helps to be familiar with the terminology of the domain.
First, we need to understand touch events. The Touch Events API supports multi-touch interaction across platforms and defines three interfaces:
- Touch — a single point of contact on a touch-sensitive device
- TouchEvent — fired when the state of contact of a stylus or finger on a touch-sensitive device changes; it signals the addition, removal, or movement of touch points
- TouchList — a list of Touch objects, one for each point currently in contact with the surface
Basic touch event types are listed below. These are enough to support touch-based interactions and multi-touch gestures.
- touchstart — executed when a touch point is placed on the device
- touchmove — executed when a finger or stylus is moved along the touch surface
- touchend — executed when touch stimulus is removed from the touch surface
- touchcancel — executed when a touch is disrupted (e.g., too many touch points on the screen)
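The four event types above can be wired up in a few lines. A minimal sketch, assuming a hypothetical element id "sketchpad" (the handler bodies here just log, as placeholders):

```javascript
// Sketch: wiring the four basic touch event types to handlers.
// "el" can be any element expected to receive touch input.
function registerTouchHandlers(el) {
  el.addEventListener("touchstart", (e) => console.log("touch started"));
  el.addEventListener("touchmove", (e) => console.log("touch moved"));
  el.addEventListener("touchend", (e) => console.log("touch ended"));
  el.addEventListener("touchcancel", (e) => console.log("touch cancelled"));
}

// Browser-only wiring; "sketchpad" is a hypothetical element id.
if (typeof document !== "undefined") {
  registerTouchHandlers(document.getElementById("sketchpad"));
}
```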
Touch events come with the following lists:
- touches — a list of touch points currently in contact with the screen
- targetTouches — a list of touch points on the target DOM element
- changedTouches — a list of the touch points whose state changed in the current event (for example, the fingers that lifted in a touchend)
Finally, each Touch object carries the following information about an individual contact point:
- identifier — a number that uniquely identifies the finger for as long as it stays in contact
- target — the DOM element on which the touch point started
- client, page, and screen coordinates — three coordinate pairs (clientX/clientY, pageX/pageY, screenX/screenY) giving the touch position relative to the viewport, the document, and the screen, respectively
- radiusX and radiusY — the radii of an ellipse that approximates the shape of the finger in contact with the surface
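Because browsers may reuse Touch objects between events, a common pattern is to copy the fields you need when tracking a finger across events. A minimal sketch (the function name copyTouch is our own):

```javascript
// Sketch: capturing the fields of a Touch object for later use.
function copyTouch(touch) {
  return {
    identifier: touch.identifier, // stable id for this finger while it stays down
    clientX: touch.clientX,       // position relative to the viewport
    clientY: touch.clientY,
    radiusX: touch.radiusX,       // ellipse approximating the contact area
    radiusY: touch.radiusY,
  };
}
```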
How to implement multi-touch on web applications
Implementing multi-touch functions on web apps starts with the addEventListener function.
An event listener attaches an event handler to the elements expected to receive touch events. The call takes the event name and the event handler, plus an optional third argument: either a boolean that makes the handler run during the capture phase (as the event travels down the DOM tree, rather than as it bubbles back up) or an options object with flags such as passive.
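A minimal sketch of that call using the options-object form, assuming a hypothetical element id "surface". The passive flag matters for touch handlers: some browsers treat touch listeners as passive by default, and a passive listener's preventDefault() is ignored.

```javascript
// A touch handler that will call preventDefault() needs passive: false.
function touchListenerOptions(willPreventDefault) {
  return { passive: !willPreventDefault, capture: false };
}

// Browser-only wiring; "surface" is a hypothetical element id.
if (typeof document !== "undefined") {
  const surface = document.getElementById("surface");
  surface.addEventListener(
    "touchmove",
    (evt) => evt.preventDefault(),
    touchListenerOptions(true)
  );
}
```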
At this point, you can play around with designing your own touch gestures like pinch, scroll, rotate, and drag.
The development process essentially follows these basic steps:
- Create an event handler for each touch event type
- Specify the application’s gesture semantics
- Read the attributes of each touch point
- Turn off default behaviors
The last step is paramount to making the final product work smoothly. The browser's default behaviors tend to interfere with custom gestures because they map the same touch events to built-in functions such as scrolling and zooming.
For instance, a swipe meant to navigate between views in your app may instead scroll the page, and iOS adds a rubber-band overscroll effect at the edges of a document. These behaviors can confuse users and hurt UX. Thankfully, they're easily disabled with a few lines of code.
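A minimal sketch of those few lines, assuming a hypothetical element id "gesture-area". Calling preventDefault() from the handler suppresses the default behavior per event; the CSS touch-action property is a declarative alternative:

```javascript
// Sketch: turning off default behaviors that collide with custom gestures.
function handleMove(evt) {
  evt.preventDefault(); // stop the browser from scrolling or zooming
  // ...update your gesture state here...
}

// Browser-only wiring; "gesture-area" is a hypothetical element id.
if (typeof document !== "undefined") {
  const el = document.getElementById("gesture-area");
  el.style.touchAction = "none"; // CSS alternative: disables panning/zooming entirely
  el.addEventListener("touchmove", handleMove, { passive: false });
}
```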
Things to consider when designing multi-touch features
To ensure that your platform accommodates multi-touch gestures well, there are a few things you need to consider.
First, recall that touchend is the event sent when a finger leaves the device's surface. It's easy to overlook during development.
In a handleEnd() handler for that event, your app can mark the end of an interaction and remove the point from its list of current touches. This matters especially for functions like drag and drop, where the drop completes only after the finger has left the surface.
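A minimal sketch of that pattern: a list of active touches plus a handleEnd() handler that removes a finger when it lifts (the names ongoingTouches and handleEnd are our own):

```javascript
// Sketch: tracking active touches and removing them on touchend.
const ongoingTouches = []; // touch points added on touchstart

function handleEnd(evt) {
  // changedTouches lists only the touch points that ended in this event
  for (const touch of evt.changedTouches) {
    const idx = ongoingTouches.findIndex((t) => t.identifier === touch.identifier);
    if (idx >= 0) {
      // for drag and drop, complete the drop here using touch.clientX/clientY
      ongoingTouches.splice(idx, 1);
    }
  }
}
```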
Allowing users to cancel erroneous touch input is good for accessibility and overall user satisfaction. Rather than waiting for the command to follow through and then exiting, it’s better to have the option to abort the event immediately.
The idea is to remove the touch input from the list of ongoing touches when touchcancel fires. In this case, you define a handleCancel() handler for the event.
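A minimal sketch of such a handler, assuming a list of active touch points collected on touchstart (the names activeTouches and handleCancel are our own). Unlike the touchend case, the gesture is simply abandoned:

```javascript
// Sketch: aborting a gesture on touchcancel by discarding its touch points.
const activeTouches = []; // touch points added on touchstart

function handleCancel(evt) {
  for (const touch of evt.changedTouches) {
    const idx = activeTouches.findIndex((t) => t.identifier === touch.identifier);
    if (idx >= 0) activeTouches.splice(idx, 1); // drop it, no end-of-gesture logic
  }
}
```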
The difficulty in writing multi-touch applications lies in complex gestures. When an event requires more than one finger on the screen, you need to be more deliberate about your code so that unwanted commands don’t get triggered in the middle of completing a gesture.
Code carefully and double-check how your program reacts to touch events. Keep in mind the difference between event.touches and event.targetTouches: the former lists all fingers currently on the screen, while the latter contains only the touch points whose touchstart occurred on the same element the handler is attached to.
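The distinction matters for complex gestures. A minimal sketch of a two-finger pinch measured with targetTouches, so a third finger resting elsewhere on the screen (still present in event.touches) doesn't disturb the gesture:

```javascript
// Sketch: distance between the two fingers of a pinch on the target element.
function pinchDistance(evt) {
  if (evt.targetTouches.length !== 2) return null; // not a two-finger gesture
  const [a, b] = evt.targetTouches;
  return Math.hypot(b.clientX - a.clientX, b.clientY - a.clientY);
}
```

Comparing this distance between successive touchmove events gives the zoom factor.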
Testing is a non-negotiable step in web design and development. Unfortunately, the process is a little more tedious than usual when implementing multi-touch features. Since most desktops do not have touch input, you’ll have to switch between your desktop and touch device while debugging your program.
What helps is a touch-test page that records the details of each touch event, making it easier to cross-reference behavior when going back to your code. Fortunately, there are numerous such tools online, so there's no excuse for skipping rigorous testing.
Finally, you need to understand that multi-touch gestures don’t work on all devices. The completeness of touch features depends on the device model and operating system. Multi-touch support has become relatively broad throughout the years but some browsers still lack the necessary configurations for full implementation.
Apple devices have fully supported touch events since iOS 4.x. Android and desktop browsers, on the other hand, typically support basic touch events but may not handle complex gestures consistently.
Outsource Your Web Development Needs to DevWerkz
Got an idea for a web app but don’t know how to pull it off? Work with a team of professional web developers and designers to see it come to life. At DevWerkz, we use our creative and technical know-how to help our clients succeed in the digital space. Contact us today.