In computing, multi-touch is technology that enables a surface (a touchpad or touchscreen) to recognize the presence of more than one point of contact with the surface at the same time. The origins of multi-touch began at CERN,[1] MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s.[2] CERN started using multi-touch screens as early as 1976 for the controls of the Super Proton Synchrotron.[3][4] Capacitive multi-touch displays were popularized by Apple's iPhone in 2007.[5][6] Multi-touch may be used to implement additional functionality, such as pinch-to-zoom, or to trigger predefined actions through gesture recognition.
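A pinch-to-zoom gesture of the kind mentioned above is typically computed from the distance between two simultaneous contact points. The sketch below is illustrative only (the function name and point format are assumptions, not from any particular API): the zoom factor is the ratio of the current finger separation to the separation when the gesture began.

```python
import math

def pinch_scale(p0_start, p1_start, p0_now, p1_now):
    """Return a zoom factor from two touch points.

    Each point is an (x, y) tuple in screen coordinates.
    A value > 1 means the fingers moved apart (zoom in);
    a value < 1 means they moved together (zoom out).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return dist(p0_now, p1_now) / dist(p0_start, p1_start)

# Two fingers start 100 px apart and end 200 px apart: zoom factor 2.0.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```

In a real gesture recognizer this ratio would be recomputed on every touch-move event and applied to the view's transform, often together with the midpoint of the two contacts as the zoom anchor.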
Several uses of the term multi-touch resulted from the rapid development of this field, with many companies using the term to market older technology that other companies and researchers call gesture-enhanced single-touch, among other names. Several other similar or related terms attempt to distinguish whether a device can exactly determine, or only approximate, the location of different points of contact, so as to further differentiate the various technological capabilities, but these terms are often used as synonyms in marketing.
Multi-touch is commonly implemented using capacitive sensing technology in mobile devices and smart devices. A capacitive touchscreen typically consists of a capacitive touch sensor, application-specific integrated circuit (ASIC) controller and digital signal processor (DSP) fabricated from CMOS (complementary metal–oxide–semiconductor) technology. A more recent alternative approach is optical touch technology, based on image sensor technology.
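The controller in a capacitive touchscreen must turn a grid of raw capacitance readings into a list of distinct contact points. A common approach, sketched here as a simplified assumption rather than any specific controller's firmware, is to threshold the grid, group adjacent active cells into connected regions, and report each region's weighted centroid as one touch:

```python
def find_contacts(grid, threshold):
    """Locate touch points in a 2D grid of capacitance deltas.

    grid: list of rows of numeric readings (arbitrary units).
    Returns a list of (row, col) centroids, one per contact,
    with sub-cell precision from capacitance-weighted averaging.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    contacts = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected region of active cells.
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and grid[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Capacitance-weighted centroid of the region.
                total = sum(grid[y][x] for y, x in cells)
                cy = sum(y * grid[y][x] for y, x in cells) / total
                cx = sum(x * grid[y][x] for y, x in cells) / total
                contacts.append((cy, cx))
    return contacts
```

Because each separated region yields its own centroid, two fingers on different parts of the sensor produce two independent contact points, which is what distinguishes true multi-touch from single-touch hardware. Production controllers add filtering, palm rejection, and frame-to-frame tracking on top of this basic step.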