Designing for Multi-Touch, Multi-User and Gesture-Based Systems

User experience (UX) principles can help you effectively create software for multi-touch, multi-user, and gestural devices such as the Microsoft Surface. These new platforms bring new challenges, some of which can be partially solved with current software design paradigms, but many of which will require applying new ideas from the cutting edge of Interaction Design (IxD) and Human-Computer Interaction (HCI). The challenges of these newer “co-present” situations (that is, situations in which users share the same physical space) differ from those software designers have long dealt with when supporting physically distributed users.

Designing a good gesture- or multi-touch-based system is first and foremost about designing a good system that happens to be gesture- or multi-touch-based. Following a general overview of gesture, multi-touch, and multi-user systems, I explain in this article how you can apply traditional UX design to these new kinds of systems. Using four well-known user experience principles as a starting point (affordances, engagement, feedback, and not making people think), I explore how each applies to gesture, multi-touch, and multi-user interfaces.

Gestures, Multi-touch, and Multi-user Systems

We’ve all grown up on mouse- and keyboard-based computers that one person uses at a time, but times are changing and so are computers. Gesture-based computers replace mouse clicks with finger taps. Going even further, multi-touch systems recognize multiple fingers and objects at once; for example, Microsoft Surface can currently recognize and track 52 fingers, objects, or tag identifiers at one time. Tracking that many contacts at once opens the system up to multiple concurrent users, all standing around the computer and using it at the same time, which is what we call “multi-user.”
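
To make the multi-touch idea concrete, here is a minimal sketch of tracking several concurrent contacts at once. It uses the standard web Pointer Events API in TypeScript rather than the Surface SDK, and the "#canvas" element id and Contact type are illustrative assumptions, not part of any particular platform.

```typescript
// Minimal sketch: tracking multiple concurrent contacts with the standard
// web Pointer Events API (not the Surface SDK). The "#canvas" id is an
// assumption for illustration.
type Contact = { x: number; y: number };

const activeContacts = new Map<number, Contact>();
const surface = document.querySelector<HTMLElement>("#canvas")!;

surface.addEventListener("pointerdown", (e: PointerEvent) => {
  // Each finger (or recognized object) arrives with its own pointerId.
  activeContacts.set(e.pointerId, { x: e.clientX, y: e.clientY });
});

surface.addEventListener("pointermove", (e: PointerEvent) => {
  if (activeContacts.has(e.pointerId)) {
    activeContacts.set(e.pointerId, { x: e.clientX, y: e.clientY });
  }
});

// Forget a contact when the finger lifts or the gesture is cancelled.
const release = (e: PointerEvent) => activeContacts.delete(e.pointerId);
surface.addEventListener("pointerup", release);
surface.addEventListener("pointercancel", release);
```

Everything the application needs to know about the current gesture, regardless of how many people are touching the screen, can be derived from this map of active contacts.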

With all of these new input capabilities come new design challenges. Imagine the complexity of keeping track of even just two users standing over a Microsoft Surface, facing each other. One of them places a camera down on the Surface, and through the magic of barcode-like tags it is recognized by one of the five cameras inside the Surface. The two users are standing at different orientations to the screen, and even if the tag on the camera identifies who it belongs to, how can the system know how to orient that camera’s pictures on the screen? And while touching or tapping a picture might be intuitive, how are you as the designer supposed to let these novice users know that they can use multiple fingers to shrink or grow the pictures? This is just about the simplest case you’ll find in this brave new world of gesture, multi-touch, and multi-user systems.
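
The shrink-or-grow interaction mentioned above usually boils down to simple geometry: scale the picture by the ratio of the current distance between two fingers to the distance when they first touched it. The sketch below shows that calculation under those assumptions; the function and type names are illustrative and not tied to any specific toolkit.

```typescript
// Minimal pinch-gesture sketch, assuming two tracked contact points.
type Point = { x: number; y: number };

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Scale factor = current finger spread / spread when the gesture began.
function pinchScale(start: [Point, Point], current: [Point, Point]): number {
  const startSpread = distance(start[0], start[1]);
  const currentSpread = distance(current[0], current[1]);
  // Guard against a degenerate zero spread at the start of the gesture.
  return startSpread > 0 ? currentSpread / startSpread : 1;
}

// Example: fingers move from 100px apart to 150px apart -> picture grows 1.5x.
const scale = pinchScale(
  [{ x: 0, y: 0 }, { x: 100, y: 0 }],
  [{ x: 0, y: 0 }, { x: 150, y: 0 }]
);
```

The math is the easy part; the harder design problem, as the rest of this article argues, is letting a novice user discover that the gesture exists at all.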
