Multi-touch computing comes of age

Touch computing isn’t new by any stretch of the imagination; it has been around for more than three decades. If that sounds like a long time, consider that the mouse also took about three decades to catch on with the average computer user. Today touch input is commonplace on notebooks in the form of touchpads. Nevertheless, when people talk about touch computing, most think of touch screens and using your fingers to manipulate information directly on the display. That kind of interactivity made its mainstream debut with the iPhone and Microsoft Surface, and with other initiatives aimed at bringing multi-touch to a wider range of devices, touch computing is poised to become a popular way of interacting with a computer.

Touch computing as exemplified by the iPhone or Surface supports multi-touch, which means the display can sense more than one finger (or, in the case of Surface, more than one person’s fingers) and react accordingly. This lets you use gestures such as pinching to zoom when manipulating photos or Web pages. Surface also recognizes devices such as digital cameras and mobile phones placed on it, allowing tricks like dragging photos from a camera to a phone with your fingers.
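To give a rough sense of what reacting to more than one finger involves, here is a minimal sketch of how a pinch-to-zoom factor can be derived from two touch points using the standard browser TouchEvent API in TypeScript. The element id, the scaling target, and the listener wiring are illustrative assumptions, not anything described above.

    // Minimal sketch: derive a zoom factor from the spread between two fingers.
    // The element id "photo" is purely illustrative.
    const photo = document.getElementById("photo") as HTMLElement;

    function touchDistance(touches: TouchList): number {
      // Distance between the first two active touch points.
      const dx = touches[0].clientX - touches[1].clientX;
      const dy = touches[0].clientY - touches[1].clientY;
      return Math.hypot(dx, dy);
    }

    let startDistance = 0;
    let startScale = 1;
    let scale = 1;

    photo.addEventListener("touchstart", (e: TouchEvent) => {
      if (e.touches.length === 2) {
        // Two fingers down: remember the initial spread and the current scale.
        startDistance = touchDistance(e.touches);
        startScale = scale;
      }
    });

    photo.addEventListener("touchmove", (e: TouchEvent) => {
      if (e.touches.length === 2 && startDistance > 0) {
        e.preventDefault(); // keep the browser from panning the page instead
        // Spreading the fingers zooms in; pinching them together zooms out.
        scale = startScale * (touchDistance(e.touches) / startDistance);
        photo.style.transform = `scale(${scale})`;
      }
    }, { passive: false });

The same idea generalizes: a gesture recognizer watches how the set of touch points changes over time and maps that change onto an operation such as scaling, rotating, or panning.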

While these are nice tricks, how useful will multi-touch actually be? That depends on what you use it for. As far as availability goes, multi-touch support will be built into the next generation of trackpads, and Windows 7 will support it as well, which should put the technology within reach of many more people over the next few years. That said, multi-touch and touch computing are not a panacea for everything you do on a computer. A finger isn’t a precise tool for anything beyond simple tasks: you wouldn’t sketch with your finger when you could use a pencil, and a stylus gives you better precision than a fingertip can. Another problem with using your finger is that it blocks whatever is beneath it.

That’s not to say touch computing isn’t useful. It enables more natural interaction with computing devices, and implemented properly it can be very useful indeed. We’ll see more of it as device makers use it to create more innovative and usable devices.

Recent developments include Microsoft’s SideSight, which tries to overcome the limitations of touch screens on compact devices by using infrared proximity sensors to capture gestures made near the device rather than on it. This lets you scroll through Web pages on a phone, say, without actually touching the screen. Then there’s SecondLight, which builds on Surface to let you layer different views of an object by holding semi-transparent sheets over a Surface computer: you could view the blueprint of a building on the main display and the actual building on the sheet you are holding over it.
