A touchscreen (or touch screen) is a type of display that can detect touch input from a user. It consists of both an input device (a touch panel) and an output device (a visual display). The touch panel is typically layered on top of the electronic visual display of a device. Touchscreens are commonly found in smartphones, tablets, laptops, and other electronic devices. The display is often an LCD, AMOLED, or OLED panel.
A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or with one or more fingers.[1] Some touchscreens work with ordinary or specially coated gloves, while others respond only to a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; for example, zooming to increase the text size.
A touchscreen enables the user to interact directly with what is displayed, instead of using a mouse, touchpad, or other such devices (other than a stylus, which is optional for most modern touchscreens).[2]
Touchscreens are common in devices such as smartphones, handheld game consoles, and personal computers. They are common in point-of-sale (POS) systems, automated teller machines (ATMs), electronic voting machines, and automobile infotainment systems and controls. They can also be attached to computers or, as terminals, to networks. They play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers. Touchscreens are important in educational settings such as classrooms or on college campuses.[3]
The popularity of smartphones, tablets, and many types of information appliances has driven the demand and acceptance of common touchscreens for portable and functional electronics. Touchscreens are found in the medical field, heavy industry, ATMs, and kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content.
Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers have acknowledged the trend toward acceptance of touchscreens as a user interface component and have begun to integrate touchscreens into the fundamental design of their products.
Predecessors of the modern touchscreen include stylus-based systems.
1946 DIRECT LIGHT PEN - A patent was filed by the Philco Company for a stylus designed for sports telecasting which, when placed against an intermediate cathode-ray tube (CRT) display, would amplify and add to the original signal. Effectively, this was used for temporarily drawing arrows or circles onto a live television broadcast, as described in US 2487641A, Denk, William E, "Electronic pointer for television images", issued 1949-11-08.
1962 OPTICAL - The first version of a touchscreen which operated independently of the light produced from the screen was patented by AT&T Corporation US 3016421A, Harmon, Leon D, "Electrographic transmitter", issued 1962-01-09. This touchscreen utilized a matrix of collimated lights shining orthogonally across the touch surface. When a beam is interrupted by a stylus, the photodetectors that no longer receive a signal can be used to determine where the interruption is. Later iterations of matrix-based touchscreens built upon this by adding more emitters and detectors to improve resolution, pulsing emitters to improve optical signal-to-noise ratio, and using a nonorthogonal matrix to remove shadow readings when using multi-touch.
1963 INDIRECT LIGHT PEN - Later inventions built upon this system to free telewriting styli from their mechanical bindings. By transcribing what a user draws onto a computer, it could be saved for future use. See US 3089918A, Graham, Robert E, "Telewriting apparatus", issued 1963-05-14.
1965 CAPACITANCE AND RESISTANCE - The first finger driven touchscreen was developed by Eric Johnson, of the Royal Radar Establishment located in Malvern, England, who described his work on capacitive touchscreens in a short article published in 1965[8][9] and then more fully—with photographs and diagrams—in an article published in 1967.[10]
MID-60s ULTRASONIC CURTAIN - Another precursor of touchscreens, an ultrasonic-curtain-based pointing device in front of a terminal display, had been developed by a team around Rainer Mallebrein at Telefunken Konstanz for an air traffic control system.[11] In 1970, this evolved into a device named "Touchinput-Einrichtung" ("touch input facility") for the SIG 50 terminal utilizing a conductively coated glass screen in front of the display.[12][11] This was patented in 1971 and the patent was granted a couple of years later.[12][11] The same team had already invented and marketed the Rollkugel mouse RKS 100-86 for the SIG 100-86 a couple of years earlier.[12]
1968 CAPACITANCE - The application of touch technology for air traffic control was described in an article published in 1968.[13] Frank Beck and Bent Stumpe, engineers from CERN (European Organization for Nuclear Research), developed a transparent touchscreen in the early 1970s,[14] based on Stumpe's work at a television factory in the early 1960s. Then manufactured by CERN, and shortly after by industry partners,[15] it was put to use in 1973.[16]
1972 OPTICAL - A group at the University of Illinois filed for a patent on an optical touchscreen[17] that became a standard part of the Magnavox Plato IV Student Terminal and thousands were built for this purpose. These touchscreens had a crossed array of 16×16 infrared position sensors, each composed of an LED on one edge of the screen and a matched phototransistor on the other edge, all mounted in front of a monochrome plasma display panel. This arrangement could sense any fingertip-sized opaque object in close proximity to the screen.
1973 MULTI-TOUCH CAPACITANCE - In 1973, Beck and Stumpe published another article describing their capacitive touchscreen. It indicates that the screen was capable of multi-touch, but that this feature was purposely inhibited, presumably because it was not considered useful at the time: "A...variable...called BUT changes value from zero to five when a button is touched. The touching of other buttons would give other non-zero values of BUT but this is protected against by software" (page 6, section 2.6).[18] "Actual contact between a finger and the capacitor is prevented by a thin sheet of plastic" (page 3, section 2.3). Projected capacitance had not yet been invented at that time.
1977 RESISTIVE - An American company, Elographics – in partnership with Siemens – began work on developing a transparent implementation of an existing opaque touchpad technology, U.S. patent No. 3,911,215, October 7, 1975, which had been developed by Elographics' founder George Samuel Hurst.[19] The resulting resistive touchscreen was first shown at the 1982 World's Fair in Knoxville.[20]
1982 MULTI-TOUCH CAMERA - Multi-touch technology began in 1982, when the University of Toronto's Input Research Group developed the first human-input multi-touch system, using a frosted-glass panel with a camera placed behind the glass.
1983 OPTICAL - An optical touchscreen was used on the HP-150 starting in 1983. The HP 150 was one of the world's earliest commercial touchscreen computers.[21] HP mounted their infrared transmitters and receivers around the bezel of a 9-inch Sony cathode ray tube (CRT).
1983 MULTI-TOUCH FORCE SENSING TOUCHSCREEN - Bob Boie of AT&T Bell Labs used capacitance to track the mechanical changes in thickness of a soft, deformable overlay membrane when one or more physical objects interact with it;[22] the flexible surface could easily be replaced if damaged by these objects. The patent states "the tactile sensor arrangements may be utilized as a touch screen".
Many derivative sources[23][24][25] retrospectively describe Boie as making a major advancement with his touchscreen technology; but no evidence has been found that a rugged multi-touch capacitive touchscreen that could sense through a rigid, protective overlay - the sort later required for a mobile phone - was ever developed or patented by Boie.[26] Many of these citations rely on anecdotal evidence from Bill Buxton of Bell Labs.[27] However, Buxton himself was unable to obtain the technology. As he states in the citation: "Our assumption (false, as it turned out) was that the Boie technology would become available to us in the near future. Around 1990 I took a group from Xerox to see this technology it [sic] since I felt that it would be appropriate for the user interface of our large document processors. This did not work out".
UP TO 1984 CAPACITANCE - Although, as cited earlier, Johnson is credited with developing the first finger operated capacitive and resistive touchscreens in 1965, these worked by directly touching wires across the front of the screen.[9] Stumpe and Beck developed a self-capacitance touchscreen in 1972, and a mutual capacitance touchscreen in 1977. Both these devices could only sense the finger by direct touch or through a thin insulating film.[28] This was 11 microns thick according to Stumpe's 1977 report.[29]
1984 TOUCHPAD - Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of kanji characters, which were stored as tiled graphics.[30]
1986 GRAPHIC TABLET - A graphic touch tablet was released for the Sega AI Computer.[31][32]
EARLY 80s EVALUATION FOR AIRCRAFT - Touch-sensitive control-display units (CDUs) were evaluated for commercial aircraft flight decks in the early 1980s. Initial research showed that a touch interface would reduce pilot workload as the crew could then select waypoints, functions and actions, rather than be "head down" typing latitudes, longitudes, and waypoint codes on a keyboard. An effective integration of this technology was aimed at helping flight crews maintain a high level of situational awareness of all major aspects of the vehicle operations including the flight path, the functioning of various aircraft systems, and moment-to-moment human interactions.[33]
EARLY 80s EVALUATION FOR CARS - Also in the early 1980s, General Motors tasked its Delco Electronics division with a project aimed at replacing the mechanical and electro-mechanical systems controlling an automobile's non-essential functions (i.e. other than throttle, transmission, braking, and steering) with solid-state alternatives wherever possible. The finished device was dubbed the ECC for "Electronic Control Center", a digital computer and software control system hardwired to various peripheral sensors, servomechanisms, solenoids, antenna and a monochrome CRT touchscreen that functioned both as display and sole method of input.[34] The ECC replaced the traditional mechanical stereo, fan, heater and air conditioner controls and displays, and was capable of providing very detailed and specific information about the vehicle's cumulative and current operating status in real time. The ECC was standard equipment on the 1985–1989 Buick Riviera and later the 1988–1989 Buick Reatta, but was unpopular with consumers—partly due to the technophobia of some traditional Buick customers, but mostly because of costly technical problems suffered by the ECC's touchscreen which would render climate control or stereo operation impossible.[35]
1985 GRAPHIC TABLET - Sega released the Terebi Oekaki, also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home computer. It consisted of a plastic pen and a plastic board with a transparent window where pen presses are detected. It was used primarily with a drawing software application.[36]
1985 MULTI-TOUCH CAPACITANCE - The University of Toronto group, including Bill Buxton, developed a multi-touch tablet that used capacitance rather than bulky camera-based optical sensing systems (see History of multi-touch).
1985 USED FOR POINT OF SALE - The first commercially available graphical point-of-sale (POS) software was demonstrated on the 16-bit Atari 520ST color computer. It featured a color touchscreen widget-driven interface.[37] The ViewTouch[38] POS software was first shown by its developer, Gene Mosher, at the Atari Computer demonstration area of the Fall COMDEX expo in 1986.[39]
1987 CAPACITANCE TOUCH KEYS - Casio launched the Casio PB-1000 pocket computer with a touchscreen consisting of a 4×4 matrix, resulting in 16 touch areas in its small LCD graphic screen.
1988 SELECT ON "LIFT-OFF" - Touchscreens had a bad reputation of being imprecise until 1988. Most user-interface books would state that touchscreen selections were limited to targets larger than the average finger. At the time, selections were done in such a way that a target was selected as soon as the finger came over it, and the corresponding action was performed immediately. Errors were common, due to parallax or calibration problems, leading to user frustration. "Lift-off strategy"[40] was introduced by researchers at the University of Maryland Human–Computer Interaction Lab (HCIL). As users touch the screen, feedback is provided as to what will be selected: users can adjust the position of the finger, and the action takes place only when the finger is lifted off the screen. This allowed the selection of small targets, down to a single pixel on a 640×480 Video Graphics Array (VGA) screen (a standard of that time).
1988 WORLD EXPO - From April to October 1988, the city of Brisbane, Australia hosted Expo 88, whose theme was "leisure in the age of technology". To support the event and provide information to expo visitors, Telecom Australia (now Telstra) erected 8 kiosks around the expo site with a total of 56 touchscreen information consoles, being specially modified Sony Videotex Workstations. Each system was also equipped with a videodisc player, speakers, and a 20 MB hard drive. To keep information current during the event, the database of visitor information was updated and remotely transferred to the computer terminals each night. Using the touchscreens, visitors were able to find information about the exposition's rides, attractions, performances, facilities, and the surrounding areas. Visitors could also select between information displayed in English and Japanese, a reflection of Australia's overseas tourist market in the 1980s. Telecom's Expo Info system was based on an earlier system employed at Expo 86 in Vancouver, Canada.[41]
1990 SINGLE AND MULTI-TOUCH GESTURES - Sears et al. (1990)[42] gave a review of academic research on single and multi-touch human–computer interaction of the time, describing gestures such as rotating knobs, adjusting sliders, and swiping the screen to activate a switch (or a U-shaped gesture for a toggle switch). The HCIL team developed and studied small touchscreen keyboards (including a study that showed users could type at 25 wpm on a touchscreen keyboard), aiding their introduction on mobile devices. They also designed and implemented multi-touch gestures such as selecting a range of a line, connecting objects, and a "tap-click" gesture to select while maintaining location with another finger.
1990 TOUCHSCREEN SLIDER AND TOGGLE SWITCHES - HCIL demonstrated a touchscreen slider,[43] which was later cited as prior art in the lock screen patent litigation between Apple and other touchscreen mobile phone vendors (in relation to U.S. patent 7,657,849).[44]
1991 INERTIAL CONTROL - From 1991 to 1992, the Sun Star7 prototype PDA implemented a touchscreen with inertial scrolling.[45]
1993 CAPACITANCE MOUSE / KEYPAD - Bob Boie of AT&T Bell Labs patented a simple mouse or keypad that capacitively sensed just one finger through a thin insulator.[46] Although not claimed or even mentioned in the patent, this technology could potentially have been used as a capacitance touchscreen.
1993 FIRST RESISTIVE TOUCHSCREEN PHONE - IBM released the IBM Simon, the first touchscreen phone.
EARLY 90s ABANDONED GAME CONTROLLER - An early attempt at a handheld game console with touchscreen controls was Sega's intended successor to the Game Gear, though the device was ultimately shelved and never released due to the high cost of touchscreen technology in the early 1990s.
1994 FIRST WIRE BASED PROJECTED CAPACITANCE - Stumpe and Beck's touchscreens (1972/1977, already cited) used opaque conductive copper tracks that obscured about 50% of the screen (80 micron track / 80 micron space). The advent of projected capacitance in 1984, however, with its improved sensing capability, indicated that most of these tracks could be eliminated. This proved to be so, and led to the invention of a wire-based touchscreen in 1994, in which one 25 micron diameter, insulation-coated wire replaced about 30 of these 80 micron wide tracks, and could also accurately sense fingers through thick glass. Screen masking caused by the copper was reduced from 50% to less than 0.5%.
The use of fine wire meant that very large touchscreens, several meters wide, could be plotted onto a thin polyester support film with a simple x/y pen plotter,[47] eliminating the need for expensive and complicated sputter coating, laser ablation, screen printing or etching. The resulting highly flexible touchscreen film, less than 100 microns thick, could be attached by static or a non-setting weak adhesive to one side of a sheet of glass, for sensing through that glass.[48] Early versions of this device were controlled by a PIC16C54 microcontroller.
1994 FIRST PUB GAME WITH TOUCHSCREEN - Appearing in pubs in 1994, JPM's Monopoly SWP (skill with prizes) was the first machine to use touchscreen technology instead of buttons (see Quiz machine / History). It used a 14-inch version of this newly invented wire-based projected capacitance touchscreen and had 64 sensing areas, the wiring pattern being similar to that shown in the lower diagram. The zig-zag pattern was introduced to minimize visual reflections and prevent Moiré interference between the wires and the monitor line scans. About 600 of these were sold for this purpose, retailing at £50 apiece, which was very cheap for the time. Working through very thick glass made it ideal for operation in a "hostile" environment, such as a pub. Although reflected light from the copper wires was noticeable under certain lighting conditions, this was mitigated by using tinted glass; the issue was later eliminated by using finer (10 micron diameter), dark-coated wires. Throughout the following decade, JPM continued to use touchscreens for many other games such as "Cluedo" and "Who Wants to Be a Millionaire".[49]
1998 PROJECTED CAPACITANCE LICENSES - This technology was licensed four years later to Romag Glass Products (later to become Zytronic Displays), and to Visual Planet in 2003 (see page 4).[50]
2004 MOBILE MULTI-TOUCH PROJECTED CAPACITANCE PATENT - Apple patented its multi-touch capacitive touchscreen for mobile devices.
2004 VIDEO GAMES WITH TOUCHSCREENS - Touchscreens were not popularly used for video games until the release of the Nintendo DS in 2004.[51]
2007 MOBILE PHONE WITH CAPACITANCE - The first mobile phone with a capacitive touchscreen was the LG Prada, released in May 2007 (before the first iPhone).[52] By 2009, touchscreen-enabled mobile phones were becoming trendy and quickly gaining popularity in both basic and advanced devices.[53][54] In the fourth quarter of 2009, for the first time, a majority of smartphones (though not of all mobile phones) shipped with touchscreens.[55]
2013 RESISTIVE VERSUS PROJECTED CAPACITANCE SALES - In 2007, 93% of touchscreens shipped were resistive and only 4% were projected capacitance. In 2013, 3% of touchscreens shipped were resistive and 96% were projected capacitance (see page 5).[56]
2015 FORCE SENSING TOUCHSCREENS - Historically, most consumer touchscreens could only sense one point of contact at a time, and few could sense how hard the user was pressing. This changed with the commercialization of multi-touch technology, and with the Apple Watch, released with a force-sensitive display in April 2015.
2015 BISTATE PROJECTED CAPACITANCE - When used as a Projected Capacitance touchscreen, in mutual capacitance mode, diagonal wiring requires each I/O line to be capable of switching between two states (bistate), an output some of the time and an input at other times. I/Os are inputs most of the time, but, once every scan, one of the I/Os has to take its turn at being an output, the remaining input I/Os sensing any signals it generates. The I/O lines, therefore, may have to change from input to output, and vice versa, many times a second. This new design won an Electronics Weekly Elektra Award in 2017.[57]
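The bistate scanning scheme described above can be sketched as follows. This is a hypothetical illustration, not a description of any particular controller; the `measure` callback stands in for the hardware's actual acquisition routine.

```python
def scan_mutual_capacitance(n_lines, measure):
    """One full scan of a bistate-I/O mutual-capacitance array.
    Each line takes its turn as the driven output while every other
    line acts as a sensing input; `measure(drive, sense)` is assumed
    to return the signal coupled between the two lines."""
    readings = {}
    for drive in range(n_lines):          # this line becomes an output
        for sense in range(n_lines):      # all other lines stay inputs
            if sense != drive:
                readings[(drive, sense)] = measure(drive, sense)
    return readings

# With n lines, a full scan gathers n*(n-1) ordered-pair readings,
# and each line switches between output and input roles every scan.
r = scan_mutual_capacitance(4, lambda d, s: 0.0)
```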
2021 FIRST "INFINITELY WIDE" TOUCHSCREEN PATENT - With standard x/y array touchscreens, the length of the horizontal sensing elements increases as the width of the touchscreen increases. Eventually, a limit is hit where the resistance gets so great that the touchscreen can no longer function properly.
The patent describes how the use of diagonal elements ensures that the length of any element never exceeds 1.414 times the height of the touchscreen, no matter how wide it is.[58] This could be reduced to 1.15 times the height, if opposing diagonal elements intersect at 60 degrees instead of 90 degrees. The elongated touchscreen could be controlled by a single processor, or the distant ends could be controlled totally independently by different processors, linked by a synchronizing processor in the overlapping middle section. The number of unique intersections could be increased by allowing individual sensing elements to run in two opposing directions - as shown in the diagram.
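The geometry behind the 1.414 and 1.15 figures can be checked directly: an element inclined at half the intersection angle from the vertical spans a screen of height h with length h / cos(angle/2). This is a minimal sketch of that relationship, not code from the patent.

```python
import math

def max_element_length(height, intersect_deg):
    """Length of a diagonal sensing element spanning a screen of the
    given height, where opposing diagonals cross at intersect_deg
    degrees. Each element is inclined intersect_deg/2 from vertical."""
    return height / math.cos(math.radians(intersect_deg / 2))

# 90-degree intersection: length = height * sqrt(2), about 1.414 * height
print(round(max_element_length(1.0, 90), 3))
# 60-degree intersection: about 1.155 * height (the "1.15" in the text)
print(round(max_element_length(1.0, 60), 3))
```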
There are a number of touchscreen technologies, with different methods of sensing touch.[42]
A resistive touchscreen panel is composed of several thin layers, the most important of which are two transparent electrically resistive layers facing each other with a thin gap between them. The top layer (the layer that is touched) has a coating on the underside surface; just beneath it is a similar resistive layer on top of its substrate. One layer has conductive connections along its sides, while the other along the top and bottom. A voltage is applied to one layer and sensed by the other. When an object, such as a fingertip or stylus tip, presses down onto the outer surface, the two layers touch to become connected at that point.[59] The panel then behaves as a pair of voltage dividers, one axis at a time. By rapidly switching between each layer, the position of pressure on the screen can be detected.
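The voltage-divider readout described above can be sketched in a few lines. This is a simplified illustration under the assumption of a linear 4-wire panel and a 10-bit ADC; real drivers also debounce, average, and map the ratios through a calibration matrix.

```python
def resistive_touch_position(adc_x, adc_y, adc_max=1023):
    """Convert raw ADC readings from a 4-wire resistive panel into
    normalized (x, y) in [0, 1]. Each axis is read in turn: a voltage
    is driven across one layer and sensed through the other, so each
    reading is simply the voltage-divider ratio at the touch point."""
    return adc_x / adc_max, adc_y / adc_max

# A touch near the centre of the panel reads roughly half scale on
# both axes; a corner touch reads near 0 on one axis, full on the other.
x, y = resistive_touch_position(512, 512)
```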
Resistive touch is used in restaurants, factories, and hospitals due to its high tolerance for liquids and contaminants. A major benefit of resistive-touch technology is its low cost. Additionally, they may be used with gloves on, or by using anything rigid as a finger substitute, as only sufficient pressure is necessary for the touch to be sensed. Disadvantages include the need to press down, and a risk of damage by sharp objects. Resistive touchscreens also suffer from poorer contrast, due to having additional reflections (i.e. glare) from the layers of material placed over the screen.[60] This type of touchscreen has been used by Nintendo in the DS family, the 3DS family, and the Wii U GamePad.[61]
Due to their simple structure, with very few inputs, resistive touchscreens are mainly used for single touch operation, although some two touch versions (often described as multi-touch) are available.[62][63] However, there are some true multi-touch resistive touchscreens available. These need many more inputs, and rely on x/y multiplexing to keep the I/O count down.
One example of a true multi-touch resistive touchscreen[64] can detect 10 fingers at the same time. It has 80 I/O connections, possibly split into 34 x inputs and 46 y outputs, forming a standard 3:4 aspect ratio touchscreen with 1564 x/y intersecting touch-sensing nodes.
Tri-state multiplexing could have been used instead of x/y multiplexing. This would have reduced the I/O count from 80 to 60 while creating 1770 unique touch sensing nodes, with no need for a bezel, and with all inputs coming from just one edge.[65]
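The node counts quoted above follow from simple combinatorics: x/y multiplexing yields inputs × outputs nodes, while tri-state multiplexing lets every unordered pair of lines form a node. A short check, assuming only those two counting rules:

```python
from math import comb

def xy_nodes(n_io, x_count):
    """x/y multiplexing: nodes = x inputs * y outputs."""
    return x_count * (n_io - x_count)

def tristate_nodes(n_io):
    """Tri-state multiplexing: every unordered pair of I/O lines can
    form a sensing node, so nodes = C(n, 2)."""
    return comb(n_io, 2)

print(xy_nodes(80, 34))     # 34 * 46 = 1564 nodes from 80 I/Os
print(tristate_nodes(60))   # C(60, 2) = 1770 nodes from only 60 I/Os
```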
Surface acoustic wave (SAW) technology uses ultrasonic waves that pass over the touchscreen panel. When the panel is touched, a portion of the wave is absorbed. The change in ultrasonic waves is processed by the controller to determine the position of the touch event. Surface acoustic wave touchscreen panels can be damaged by outside elements. Contaminants on the surface can also interfere with the functionality of the touchscreen.
SAW devices have a wide range of applications, including delay lines, filters, correlators and DC to DC converters.
A capacitive touchscreen panel consists of an insulator, such as glass, coated with a transparent conductor, such as indium tin oxide (ITO).[66] As the human body is also an electrical conductor, touching the surface of the screen results in a distortion of the screen's electrostatic field, measurable as a change in capacitance. Different technologies may be used to determine the location of the touch. The location is then sent to the controller for processing. Some touchscreens use silver instead of ITO, as ITO causes several environmental problems due to the use of indium.[67][68][69][70] The controller is typically a complementary metal–oxide–semiconductor (CMOS) application-specific integrated circuit (ASIC) chip, which in turn usually sends the signals to a CMOS digital signal processor (DSP) for processing.[71][72]
Unlike a resistive touchscreen, some capacitive touchscreens cannot be used to detect a finger through electrically insulating material, such as gloves. This disadvantage especially affects usability in consumer electronics, such as touch tablet PCs and capacitive smartphones in cold weather when people may be wearing gloves. It can be overcome with a special capacitive stylus, or a special-application glove with an embroidered patch of conductive thread allowing electrical contact with the user's fingertip.
A low-quality switching-mode power supply unit with an accordingly unstable, noisy voltage may temporarily interfere with the precision, accuracy and sensitivity of capacitive touch screens.[73][74][75]
Some capacitive display manufacturers continue to develop thinner and more accurate touchscreens. Those for mobile devices are now being produced with 'in-cell' technology, such as in Samsung's Super AMOLED screens, that eliminates a layer by building the capacitors inside the display itself. This type of touchscreen reduces the visible distance between the user's finger and what the user is touching on the screen, reducing the thickness and weight of the display, which is desirable in smartphones.
A simple parallel-plate capacitor has two conductors separated by a dielectric layer. Most of the energy in this system is concentrated directly between the plates. Some of the energy spills over into the area outside the plates, and the electric field lines associated with this effect are called fringing fields. Part of the challenge of making a practical capacitive sensor is to design a set of printed circuit traces which direct fringing fields into an active sensing area accessible to a user. A parallel-plate capacitor is not a good choice for such a sensor pattern. Placing a finger near fringing electric fields adds conductive surface area to the capacitive system. The additional charge storage capacity added by the finger is known as finger capacitance, or CF. The capacitance of the sensor without a finger present is known as parasitic capacitance, or CP.
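Using the CF and CP definitions above, the quantity a controller must resolve is the fractional capacitance change a finger produces on top of the parasitic baseline. A minimal sketch of that relation, with illustrative (not measured) values:

```python
def touch_signal(cp, cf):
    """Relative signal seen by a capacitive sensor: the finger adds
    CF on top of the ever-present parasitic CP, so the fractional
    change the controller must resolve is CF / CP. Lower parasitic
    capacitance therefore means a stronger relative touch signal."""
    return cf / cp

# e.g. 1 pF of finger capacitance on a 10 pF parasitic baseline
print(touch_signal(10.0, 1.0))  # 0.1
```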
In this basic technology, only one side of the insulator is coated with a conductive layer. A small voltage is applied to the layer, resulting in a uniform electrostatic field. When a conductor, such as a human finger, touches the uncoated surface, a capacitor is dynamically formed. The sensor's controller can determine the location of the touch indirectly from the change in the capacitance as measured from the four corners of the panel. As it has no moving parts, it is moderately durable but has limited resolution, is prone to false signals from parasitic capacitive coupling, and needs calibration during manufacture. It is therefore most often used in simple applications such as industrial controls and kiosks.[76]
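The four-corner measurement described above can be illustrated with simple current ratios. This is a hypothetical sketch of the principle only; real surface-capacitance controllers also linearize the result and calibrate out parasitic coupling.

```python
def surface_cap_position(i_tl, i_tr, i_bl, i_br):
    """Estimate normalized touch position on a surface-capacitance
    panel from the currents measured at its four corners (top-left,
    top-right, bottom-left, bottom-right). The current through each
    corner grows as the touch moves toward it, so simple ratios give
    (x, y) in [0, 1], with the origin at the top-left corner."""
    total = i_tl + i_tr + i_bl + i_br
    x = (i_tr + i_br) / total   # share of current drawn on the right
    y = (i_bl + i_br) / total   # share of current drawn on the bottom
    return x, y

# Equal corner currents correspond to a touch at the centre of the panel.
print(surface_cap_position(1.0, 1.0, 1.0, 1.0))  # (0.5, 0.5)
```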
Although some standard capacitance detection methods are projective, in the sense that they can be used to detect a finger through a non-conductive surface, they are very sensitive to fluctuations in temperature, which expand or contract the sensing plates, causing fluctuations in the capacitance of these plates.[77] These fluctuations result in a lot of background noise, so a strong finger signal is required for accurate detection. This limits applications to those where the finger directly touches the sensing element or is sensed through a relatively thin non-conductive surface.
Projected capacitive touch (PCT; also PCAP) technology is a variant of capacitive touch technology but where sensitivity to touch, accuracy, resolution and speed of touch have been greatly improved by the use of a simple form of artificial intelligence. This intelligent processing enables finger sensing to be projected, accurately and reliably, through very thick glass and even double glazing.[78]
Projected capacitance is a method for accurately detecting and tracking a particular variable, or group of variables (such as finger(s)), by: a) using a simple form of artificial intelligence to develop a profile of the capacitance changing effects expected for that variable, b) specifically looking for such changes, and c) eliminating measured capacitance changes that do not match this profile, attributable to global variables (such as temperature/humidity, dirt build-up, electrical noise), and local variables (such as rain drops, partial shade and hands/elbows). Capacitance sensors may be discrete - possibly (but not necessarily) in a regular array, or they may be multiplexed.
Assumptions.
In practice, various assumptions are made, such as: a) fingers will not be touching the screen at "power-up"; b) a finger will not remain on the same spot for more than a fixed period of time; and c) fingers will not be touching everywhere at the same time.
a) If a finger is touching the screen at "power-up", then, as soon as it is removed, a large "anti-touch" capacitance change will be detected. This signals the processor to reset the touch thresholds and store new "no touch" values for each input.
b) Long-term drift compensation is used to gradually raise or lower these thresholds (trending eventually to "no-touch"). This compensates for global changes in temperature and humidity. It also eliminates the possibility of any position appearing to be touched for too long, due to some "non-finger" event. This might be caused, for example, by a wet leaf landing on, and sticking to the screen.
c) When the validity of one or more touches is to be decided, assumption c) means that the average of the changes measured on the inputs with the smallest change can be used to "offset" the touch thresholds of the inputs in contention. This minimizes the influence of hands and arms.
By these and other means, the processor is constantly fine tuning the touch thresholds, and tweaking the touch sensitivity of each input. This enables very small changes, caused only by fingers, to be accurately detected through thick overlays, or several centimeters of air.
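The threshold management described above can be sketched as a simple per-input loop. This is an illustrative sketch only, not any vendor's firmware; the constants `DRIFT_RATE` and `TOUCH_MARGIN` and the class name are assumptions:

```python
# Sketch of projected-capacitance baseline drift and touch detection.
# All names and constants are illustrative assumptions.

DRIFT_RATE = 0.01     # fraction of the gap closed per scan (long-term drift compensation)
TOUCH_MARGIN = 5.0    # counts above baseline needed to report a touch

class CapInput:
    def __init__(self, baseline):
        self.baseline = baseline          # stored "no touch" value

    def update(self, measured):
        """Return True if this input is touched, drifting the baseline otherwise."""
        touched = measured > self.baseline + TOUCH_MARGIN
        if not touched:
            # Long-term drift compensation: trend the baseline toward the
            # current reading, absorbing slow temperature/humidity changes
            # (and eventually "forgetting" a non-finger event such as a wet leaf).
            self.baseline += DRIFT_RATE * (measured - self.baseline)
        return touched

inp = CapInput(baseline=100.0)
print(inp.update(100.2))   # small change: no touch, baseline drifts slightly
print(inp.update(112.0))   # large change: reported as a touch
```

Because the baseline only drifts while no touch is reported, a genuine finger does not erode its own detection threshold, while slow environmental changes are absorbed.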
When a conductive object, such as a finger, comes into contact with a PCT panel, it distorts the local electrostatic field at that point. This is measurable as a change in capacitance. If a finger bridges the gap between two of the "tracks", the charge field is further interrupted and detected by the controller. The capacitance can be changed and measured at every individual point on the grid. This system is able to accurately track touches.[79]
Because the top layer of a PCT panel is glass, it is sturdier than less-expensive resistive touch technology. Unlike traditional capacitive touch technology, a PCT system can sense a passive stylus or gloved fingers.
Moisture on the surface of the panel, high humidity, or collected dust are not a problem, especially with 'fine wire' based touchscreens, because wire-based touchscreens have a very low 'parasitic' capacitance and a greater distance between neighboring conductors. Projected capacitance has "long-term drift compensation" built in, which minimizes the effects of slowly changing environmental factors, such as the build-up of dirt and effects caused by changes in the weather.[78] Drops of rain have little effect, but flowing water, and especially flowing sea water (due to its electrical conductivity), can cause short-term issues.
A high frequency (RF) signal, possibly from 100 kHz to 1 MHz, is imposed on one track at a time, and appropriate capacitance measurements are taken (as described later in this article).[80] This process is repeated until all the tracks have been sampled.
Conductive tracks are often transparent, one example being indium tin oxide (ITO), a transparent electrical conductor, but these conductive tracks can also be made of very fine, non-transparent metal mesh[81] or individual fine wires.[47]
Layout can vary depending on whether a single finger is to be detected or multiple fingers.
In order to detect many fingers at the same time, some modern PCT touch screens are composed of thousands of discrete keys, each key being linked individually to the edge of the touch screen. This is enabled by etching an electrode grid pattern in a transparent conductive coating on one side of a sheet of glass or plastic.
To reduce the number of input tracks, most PCT touch screens use multiplexing. This enables, for example, 100 (n) discrete key inputs to be reduced to 20 when using x/y multiplexing, or 15 if using bistate multiplexing or tri-state multiplexing.
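The savings quoted above can be checked with simple arithmetic. In this sketch (the function names are assumptions), x/y multiplexing needs the smallest x + y lines with x × y ≥ keys, while a diagonal (bi-state/tri-state) lattice of n lines yields up to n(n−1)/2 distinct crossings:

```python
import math

def xy_lines(keys):
    """Smallest x + y such that x * y >= keys (x/y multiplexing)."""
    best = keys + 1
    for x in range(1, keys + 1):
        y = math.ceil(keys / x)
        best = min(best, x + y)
    return best

def lattice_lines(keys):
    """Smallest n such that n*(n-1)/2 >= keys (diagonal lattice multiplexing)."""
    n = 2
    while n * (n - 1) // 2 < keys:
        n += 1
    return n

print(xy_lines(100))       # 20 lines (e.g. a 10 x 10 grid)
print(lattice_lines(100))  # 15 lines (15 * 14 / 2 = 105 intersections)
```

This reproduces the figures in the text: 100 discrete keys need only 20 inputs with x/y multiplexing, or 15 with a diagonal lattice.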
Capacitance multiplexing requires a grid of intersecting, but electrically isolated conductive tracks. This can be achieved in many different ways. One way is by creating parallel conductive tracks on one side of a plastic film, and similar parallel tracks on the other side, orientated at 90 degrees to the first side.[82][83]
Another way is to etch tracks on separate sheets of glass, and join these sheets, with tracks at right angles to each other, face to face using a thin non-conductive, adhesive interlayer.[84]
A simple alternative is to embed an x/y or diagonal grid of very fine, insulation coated conductive wires in a thin polyester film. This film can then be attached to one side of a sheet of glass, for operation through the glass.[47]
Touch resolution and the number of fingers that can be detected simultaneously are determined by the number of cross-over points (x × y). If x + y = n, then the maximum possible number of cross-overs is (n/2)².
An electrical signal, imposed on one electrical conductor, can be capacitively "sensed" by another electrical conductor that is in very close proximity, but electrically isolated—a feature that is exploited in mutual capacitance touchscreens. In a mutual capacitive sensor array, the "mutual" crossing of one electrical conductor with another electrical conductor, but with no direct electrical contact, forms a capacitor (see touchscreen#Construction).
High frequency voltage pulses are applied to these conductors, one at a time. These pulses capacitively couple to every conductor that intersects it.
Bringing a finger or conductive stylus close to the surface of the sensor changes the local electrostatic field, which in turn reduces the capacitance between these intersecting conductors. Any significant change in the strength of the signal sensed is used to determine if a finger is present or not at an intersection.[85]
The capacitance change at every intersection on the grid can be measured to accurately determine one or more touch locations.
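A full mutual-capacitance scan, as described above, can be sketched as reading the coupling change at every intersection and thresholding; this is an illustrative sketch (the function name and threshold are assumptions), not any controller's actual code:

```python
def mutual_cap_touches(grid, threshold):
    """grid[r][c] is the measured drop in capacitive coupling at each
    row/column intersection; every cell above threshold is a touch."""
    return [(r, c)
            for r, row in enumerate(grid)
            for c, val in enumerate(row)
            if val > threshold]

grid = [[0.0, 0.1, 0.0],
        [0.0, 2.4, 0.1],    # finger over intersection (1, 1)
        [1.9, 0.0, 0.0]]    # second finger over intersection (2, 0)
print(mutual_cap_touches(grid, threshold=1.0))  # [(1, 1), (2, 0)]
```

Because every intersection is measured independently, multiple simultaneous touches are located without ambiguity.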
Mutual capacitance allows multi-touch operation where multiple fingers, palms or styli can be accurately tracked at the same time. The greater the number of intersections, the better the touch resolution and the more independent fingers that can be detected.[86][87] This indicates a distinct advantage of diagonal wiring over standard x/y wiring, since diagonal wiring creates nearly twice the number of intersections.
A 30 i/o, 16×14 x/y array, for example, would have 224 of these intersections / capacitors, and a 30 i/o diagonal lattice array could have 435 intersections.
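These intersection counts follow directly from the formulas given earlier: an x/y array has x × y crossings (at most (n/2)² for n lines), while a diagonal lattice can cross every pair of lines once, giving C(n, 2) crossings. A quick check:

```python
from math import comb

io = 30
xy = 16 * 14                 # the 16 x 14 x/y array from the text
xy_max = (io // 2) ** 2      # maximum for x + y = 30 is 15 * 15
diagonal = comb(io, 2)       # each pair of lines crosses once in a diagonal lattice

print(xy, xy_max, diagonal)  # 224 225 435
```

For the same 30 inputs/outputs, the diagonal lattice therefore offers nearly twice as many sensing points as the x/y array.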
Each trace of an x/y mutual capacitance array has only one function: it is either an input or an output. The horizontal traces may be transmitters while the vertical traces are sensors, or vice versa.
The traces in a diagonal mutual capacitance array, however, have to continuously change their functionality "on the fly", by a process called bi-state or tri-state multiplexing. Some of the time a trace will be an output; at another time it will be an input or "grounded". A look-up table can be used to simplify this process. By slightly distorting the conductors in an "n" I/O diagonal matrix, the equivalent of an (n−1) by (n/2) array is formed. After address decoding, this can then be processed as a standard x/y array.
Self-capacitance sensors can have the same x/y or diagonal grid layout[65] as mutual capacitance sensors, but with self-capacitance, all the traces usually operate independently, with no interaction between different traces. Along with several other methods, the extra capacitive load of a finger on a trace electrode may be measured by a current meter, or by the change in frequency of an RC oscillator.
Traces are sensed, one after the other until all the traces have been sensed. A finger may be detected anywhere along the whole length of a trace (even "off-screen"), but there is no indication where the finger is along that trace. If, however, a finger is also detected along another intersecting trace, then it is assumed that the finger position is at the intersection of the two traces. This allows for the speedy and accurate detection of a single finger.
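This single-finger logic can be sketched as picking the strongest x reading and the strongest y reading; a minimal illustration (function name and values are assumptions, not a real driver):

```python
def locate_single_finger(x_readings, y_readings):
    """Self-capacitance: one reading per trace. A single finger is assumed
    to sit at the intersection of the strongest x and strongest y trace."""
    x = max(range(len(x_readings)), key=lambda i: x_readings[i])
    y = max(range(len(y_readings)), key=lambda j: y_readings[j])
    return x, y

# 16 x traces + 14 y traces: only 30 measurements for one finger
x_vals = [0.1] * 16; x_vals[5] = 3.2    # finger loads x trace 5
y_vals = [0.1] * 14; y_vals[9] = 2.8    # ...and y trace 9
print(locate_single_finger(x_vals, y_vals))  # (5, 9)
```

Note that each axis is resolved independently, which is exactly why this is fast for one finger but ambiguous for two.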
There is, however, ambiguity if more than one finger is to be detected.[88] Two fingers may have four possible detection positions, only two of which are true, the other two being "ghosts." However, by selectively de-sensitizing any touch-points in contention, conflicting results are easily resolved.[89] This enables self-capacitance to be used for two touch operation.
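The ghosting problem can be shown directly: self-capacitance only reports which traces are active, not which intersections, so two fingers produce four candidate positions. A small sketch (names are illustrative):

```python
from itertools import product

def candidate_positions(active_x, active_y):
    """Self-capacitance reports active traces, not intersections: with two
    fingers, every active-x / active-y pairing is a candidate position."""
    return sorted(product(active_x, active_y))

# Fingers at (2, 7) and (9, 3): traces x = {2, 9} and y = {3, 7} all light up.
print(candidate_positions({2, 9}, {3, 7}))
# [(2, 3), (2, 7), (9, 3), (9, 7)] -- (2, 3) and (9, 7) are the "ghosts"
```

Four candidates, only two real: some extra step (such as the selective de-sensitizing mentioned above) is needed to reject the two ghosts.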
Although mutual capacitance is simpler for multi-touch, multi-touch can be achieved using self-capacitance.
If the trace being sensed is intersected by another trace that has a "desensitizing" signal on it, then that intersection is insensitive to touch. By imposing such a "desensitizing" signal on all the intersecting traces, except one, along the trace being sensed, then just a short length of that trace will be sensitive to touch.[89] By selecting a sequence of these sensing sections along the trace, it is possible to determine the accurate position of multiple fingers along the one trace. This process can then be repeated for all the other traces until the whole screen has been scanned.
Self-capacitive touch screen layers are used on mobile phones such as the Sony Xperia Sola,[90] the Samsung Galaxy S4, Galaxy Note 3, Galaxy S5, and Galaxy Alpha.
Self-capacitance is far more sensitive than mutual capacitance and is mainly used for single touch, simple gesturing and proximity sensing where the finger does not even have to touch the glass surface. Mutual capacitance is mainly used for multitouch applications.[91] Many touchscreen manufacturers use both self and mutual capacitance technologies in the same product, thereby combining their individual benefits.[92]
When using a 16 × 14 x/y array to determine the position of a single finger by self-capacitance, 30 (i.e. 16 + 14) capacitance measurements are required. The finger is determined to be at the intersection of the strongest of the 16 x measurements and the strongest of the 14 y measurements. When using mutual capacitance, however, every intersection may have to be measured, making a total of 224 (i.e. 16 × 14) capacitance measurements. In this example, therefore, mutual capacitance requires nearly 7 times as many measurements as self-capacitance to detect the position of a finger.
Many applications, such as selecting items from a list or menu, require just one finger, and self-capacitance is eminently suitable for such applications, due to the relatively low processing load, simpler processing method, the ability to sense through thick dielectric materials or air, and the possibility of reducing the number of inputs required, through repeat track layouts.[93]
For many other applications, however, such as for expanding / contracting items on the screen and for other gestures, two or more fingers need to be tracked.
Two fingers can be detected and tracked accurately using self-capacitance, but this involves a few extra calculations and 4 extra capacitance measurements to eliminate the 2 "ghost" positions. One method is to undertake a full self-capacitance scan to detect the 4 ambiguous finger positions, then use just 4 targeted mutual capacitance measurements to discover which 2 of the 4 positions are valid and which 2 are not. This gives a total of 34 measurements, still far fewer than the 224 required when using mutual capacitance alone.
With 3 fingers, 9 disambiguations are required; with 4 fingers, 16 disambiguations etc.
With more fingers, it may be decided that the process of disambiguation is too unwieldy. If sufficient processing power is available, the switch can then be made to full mutual capacitance scanning.[89]
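The measurement counts above can be tallied in a few lines. This sketch (the function name is an assumption) models the hybrid approach: one self-capacitance reading per trace, plus fingers² targeted mutual readings to resolve ghosts once more than one finger is present:

```python
def hybrid_measurements(x_traces, y_traces, fingers):
    """Measurements for a hybrid scan: one self-capacitance reading per
    trace, plus fingers^2 targeted mutual readings for disambiguation
    (none needed for a single finger)."""
    extras = fingers ** 2 if fingers > 1 else 0
    return x_traces + y_traces + extras

full_mutual = 16 * 14   # a complete mutual-capacitance scan of the same array
for f in (1, 2, 3, 4):
    print(f, "finger(s):", hybrid_measurements(16, 14, f), "vs", full_mutual)
```

For the 16 × 14 example, this gives 30, 34, 39 and 46 measurements for one to four fingers, against 224 for full mutual scanning, which is why a controller might only switch to full mutual capacitance when many fingers make disambiguation unwieldy.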
Capacitive touchscreens do not necessarily need to be operated by a finger, but until recently the special styli required could be quite expensive to purchase. The cost of this technology has fallen greatly in recent years and capacitive styli are now widely available for a nominal charge, and often given away free with mobile accessories. These consist of an electrically conductive shaft with a soft conductive rubber tip, thereby resistively connecting the fingers to the tip of the stylus.
An infrared touchscreen uses an array of X-Y infrared LED and photodetector pairs around the edges of the screen to detect a disruption in the pattern of LED beams. These LED beams cross each other in vertical and horizontal patterns, which helps the sensors pick up the exact location of the touch. A major benefit of such a system is that it can detect essentially any opaque object, including a finger, gloved finger, stylus or pen. It is generally used in outdoor applications and POS systems that cannot rely on a conductor (such as a bare finger) to activate the touchscreen. Unlike capacitive touchscreens, infrared touchscreens do not require any patterning on the glass, which increases the durability and optical clarity of the overall system. However, infrared touchscreens are sensitive to dirt and dust that can interfere with the infrared beams, suffer from parallax on curved surfaces, and are prone to accidental presses when the user hovers a finger over the screen while searching for the item to be selected.
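The beam-interruption principle can be sketched as finding the blocked row and column beams and intersecting them. This is an illustrative sketch only (a real controller also handles calibration and multi-touch occlusion):

```python
def ir_touch(row_beams, col_beams):
    """Each entry is True if that LED/photodetector beam was received intact.
    A touch is reported at the intersection of the broken beams."""
    broken_rows = [i for i, ok in enumerate(row_beams) if not ok]
    broken_cols = [j for j, ok in enumerate(col_beams) if not ok]
    if broken_rows and broken_cols:
        # the centre of the occluded span approximates the touch point
        r = sum(broken_rows) / len(broken_rows)
        c = sum(broken_cols) / len(broken_cols)
        return (r, c)
    return None   # no touch (or only one axis occluded, e.g. by dust)

rows = [True] * 10; rows[4] = rows[5] = False   # a finger blocks two row beams
cols = [True] * 10; cols[7] = False             # ...and one column beam
print(ir_touch(rows, cols))   # (4.5, 7.0)
```

Averaging the occluded span also gives a rough estimate of the touching object's size, which such systems can use to reject debris.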
A translucent acrylic sheet is used as a rear-projection screen to display information. The edges of the acrylic sheet are illuminated by infrared LEDs, and infrared cameras are focused on the back of the sheet. Objects placed on the sheet are detectable by the cameras. When the sheet is touched by the user, frustrated total internal reflection results in leakage of infrared light which peaks at the points of maximum pressure, indicating the user's touch location. Microsoft's PixelSense tablets use this technology.
Optical touchscreens are a relatively modern development in touchscreen technology, in which two or more image sensors (such as CMOS sensors) are placed around the edges (mostly the corners) of the screen. Infrared backlights are placed in the sensor's field of view on the opposite side of the screen. A touch blocks some lights from the sensors, and the location and size of the touching object can be calculated (see visual hull). This technology is growing in popularity due to its scalability, versatility, and affordability for larger touchscreens.
Introduced in 2002 by 3M, this system detects a touch by using sensors to measure the piezoelectricity in the glass. Complex algorithms interpret this information and provide the actual location of the touch.[94] The technology is unaffected by dust and other outside elements, including scratches. Since there is no need for additional elements on screen, it also claims to provide excellent optical clarity. Any object can be used to generate touch events, including gloved fingers. A downside is that after the initial touch, the system cannot detect a motionless finger. However, for the same reason, resting objects do not disrupt touch recognition.
The key to this technology is that a touch at any one position on the surface generates a sound wave in the substrate which then produces a unique combined signal as measured by three or more tiny transducers attached to the edges of the touchscreen. The digitized signal is compared to a list corresponding to every position on the surface, determining the touch location. A moving touch is tracked by rapid repetition of this process. Extraneous and ambient sounds are ignored since they do not match any stored sound profile. The technology differs from other sound-based technologies by using a simple look-up method rather than expensive signal-processing hardware. As with the dispersive signal technology system, a motionless finger cannot be detected after the initial touch. However, for the same reason, the touch recognition is not disrupted by any resting objects. The technology was created by SoundTouch Ltd in the early 2000s, as described by the patent family EP1852772, and introduced to the market by Tyco International's Elo division in 2006 as Acoustic Pulse Recognition.[95] The touchscreen used by Elo is made of ordinary glass, giving good durability and optical clarity. The technology usually retains accuracy with scratches and dust on the screen. The technology is also well suited to displays that are physically larger.
There are several principal ways to build a touchscreen. The key goals are to recognize one or more fingers touching a display, to interpret the command that this represents, and to communicate the command to the appropriate application.
Multi-touch projected capacitance screens
A very simple, low-cost way to make a multi-touch projected capacitance touchscreen is to sandwich an x/y or diagonal matrix of fine, insulation-coated copper or tungsten wires between two layers of clear polyester film. This creates an array of proximity-sensing micro-capacitors. One of these micro-capacitors every 10 to 15 mm is probably sufficient spacing if fingers are relatively widely spaced apart, but very high discrimination multi-touch may need a micro-capacitor every 5 or 6 mm. A similar system can be used for ultra-high resolution sensing, such as fingerprint sensing, which requires a micro-capacitor spacing of about 44 to 50 microns.[96]
Such touchscreens can be manufactured at home, using readily available tools and materials, or industrially.
First, a "continuous-trace" wiring pattern is generated using a simple CAD system.
The wire is threaded through a plotter pen and plotted directly, as one continuous wire, onto a thin sheet of adhesive-coated, clear polyester film (such as "window film"), using a standard, low-cost x/y pen plotter.[47] After plotting, the single wire is gently cut into individual sections with a sharp scalpel, taking care not to damage the film.
A second identical polyester film is laminated over the first film. The resulting touchscreen film is then trimmed to shape, and a connector is retro-fitted.
The end product is extremely flexible, being about 75 microns thick (about the thickness of a human hair). It can even be creased without loss of functionality.
The film can be mounted on, or behind non-conducting (or slightly conducting) surfaces. Usually, it is mounted behind a sheet of glass up to 12 mm thick (or more), for sensing through the glass.
This method is suitable for a wide range of touchscreen sizes, from very small to several meters wide, or even wider if using a diagonally wired matrix.[65][58]
The end product is environmentally friendly as it uses recyclable polyester, and minute quantities of copper wire. The film could even have a second life as another product, such as drawing film, or wrapping film. Unlike some other touchscreen technologies, no complex processes or rare materials are used.
For non-touchscreen applications, other plastics (e.g. vinyl or ABS) may be used. The film can be blow molded or heat formed into complex three dimensional shapes, such as bottles, globes or car dashboards. Alternatively, the wires can be embedded in thick plastic such as fiber glass or carbon fiber body panels.
Single touch resistive touchscreens
In the resistive approach, which used to be the most popular technique, the screen typically consists of four layers.
When a user touches the surface, the system records the change in the electric current that flows through the display.
Dispersive signal
Dispersive signal technology measures the piezoelectric effect, the voltage generated when mechanical force is applied to a material, which occurs when a strengthened glass substrate is touched.
Infrared
There are two infrared-based approaches. In one, an array of sensors detects a finger touching or almost touching the display, thereby interrupting infrared light beams projected over the screen. In the other, bottom-mounted infrared cameras record heat from screen touches.
In each case, the system determines the intended command based on the controls showing on the screen at the time and the location of the touch.
The development of multi-touch screens facilitated the tracking of more than one finger on the screen; thus, operations that require more than one finger are possible. These devices also allow multiple users to interact with the touchscreen simultaneously.
With the growing use of touchscreens, the cost of touchscreen technology is routinely absorbed into the products that incorporate it and is nearly eliminated. Touchscreen technology has demonstrated reliability and is found in airplanes, automobiles, gaming consoles, machine control systems, appliances, and handheld display devices including cellphones; the touchscreen market for mobile devices was projected to produce US$5 billion by 2009.[97][needs update]
The ability to point accurately on the screen itself is also advancing with the emerging graphics tablet-screen hybrids. Polyvinylidene fluoride (PVDF) plays a major role in this innovation due to its high piezoelectric properties, which allow the tablet to sense pressure, making such things as digital painting behave more like paper and pencil.[98]
TapSense, announced in October 2011, allows touchscreens to distinguish what part of the hand was used for input, such as the fingertip, knuckle and fingernail. This could be used in a variety of ways, for example, to copy and paste, to capitalize letters, to activate different drawing modes, etc.[99][100]
For touchscreens to be effective input devices, users must be able to accurately select targets and avoid accidental selection of adjacent targets. The design of touchscreen interfaces should reflect technical capabilities of the system, ergonomics, cognitive psychology and human physiology.
Guidelines for touchscreen designs were first developed in the 2000s, based on early research and actual use of older systems, typically using infrared grids—which were highly dependent on the size of the user's fingers. These guidelines are less relevant for the bulk of modern touch devices which use capacitive or resistive touch technology.[101][102]
From the mid-2000s, makers of operating systems for smartphones have promulgated standards, but these vary between manufacturers, and allow for significant variation in size based on technology changes, so are unsuitable from a human factors perspective.[103][104][105]
Much more important is the accuracy humans have in selecting targets with their finger or a pen stylus. The accuracy of user selection varies by position on the screen: users are most accurate at the center, less so at the left and right edges, and least accurate at the top edge and especially the bottom edge. The R95 accuracy (required radius for 95% target accuracy) varies from 7 mm (0.28 in) in the center to 12 mm (0.47 in) in the lower corners.[106][107][108][109][110] Users are subconsciously aware of this, and take more time to select targets which are smaller or at the edges or corners of the touchscreen.[111]
This user inaccuracy is a result of parallax, visual acuity and the speed of the feedback loop between the eyes and fingers. The precision of the human finger alone is much, much higher than this, so when assistive technologies are provided—such as on-screen magnifiers—users can move their finger (once in contact with the screen) with precision as small as 0.1 mm (0.004 in).[112][dubious – discuss]
Users of handheld and portable touchscreen devices hold them in a variety of ways, and routinely change their method of holding and selection to suit the position and type of input. There are four basic types of handheld interaction.
Use rates vary widely. While two-thumb tapping is encountered rarely (1–3%) for many general interactions, it is used for 41% of typing interaction.[113]
In addition, devices are often placed on surfaces (desks or tables) and tablets especially are used in stands. The user may point, select or gesture in these cases with their finger or thumb, and vary use of these methods.[114]
Touchscreens are often used with haptic response systems. A common example of this technology is the vibratory feedback provided when a button on the touchscreen is tapped. Haptics are used to improve the user's experience with touchscreens by providing simulated tactile feedback, and can be designed to react immediately, partly countering on-screen response latency. Research from the University of Glasgow (Brewster, Chohan, and Brown, 2007; and more recently Hogan) demonstrates that touchscreen users reduce input errors (by 20%), increase input speed (by 20%), and lower their cognitive load (by 40%) when touchscreens are combined with haptics or tactile feedback. On top of this, a study conducted in 2013 by Boston College explored the effects that a touchscreen's haptic stimulation had on triggering psychological ownership of a product. Their research concluded that a touchscreen's ability to incorporate high amounts of haptic involvement resulted in customers feeling a greater sense of ownership of the products they were designing or buying. The study also reported that consumers using a touchscreen were willing to accept a higher price point for the items they were purchasing.[115]
Touchscreen technology has become integrated into many aspects of the customer service industry in the 21st century.[116] The restaurant industry is a good example of touchscreen implementation in this domain. Chain restaurants such as Taco Bell,[117] Panera Bread, and McDonald's offer touchscreens as an option when customers are ordering items off the menu.[118] While the addition of touchscreens is a development for this industry, customers may choose to bypass the touchscreen and order from a traditional cashier.[117] To take this a step further, a restaurant in Bangalore has attempted to completely automate the ordering process. Customers sit down to a table embedded with touchscreens and order off an extensive menu. Once the order is placed, it is sent electronically to the kitchen.[119] These types of touchscreens fit under the point-of-sale (POS) systems mentioned in the lead section.
Extended use of gestural interfaces without the ability of the user to rest their arm is referred to as "gorilla arm".[120] It can result in fatigue, and even repetitive stress injury when routinely used in a work setting. Certain early pen-based interfaces required the operator to work in this position for much of the workday.[121] Allowing the user to rest their hand or arm on the input device or a frame around it is a solution for this in many contexts. This phenomenon is often cited as an example of movements to be minimized by proper ergonomic design.
Unsupported touchscreens are still fairly common in applications such as ATMs and data kiosks, but are not an issue as the typical user only engages for brief and widely spaced periods.[122]
Touchscreens can suffer from the problem of fingerprints on the display. This can be mitigated by the use of materials with optical coatings designed to reduce the visible effects of fingerprint oils. Most modern smartphones have oleophobic coatings, which lessen the amount of oil residue. Another option is to install a matte-finish anti-glare screen protector, which creates a slightly roughened surface that does not easily retain smudges.
Capacitive touchscreens rarely work when the user wears gloves. The thickness of the glove and the material it is made of play a significant role in whether the touchscreen can pick up a touch.
Some devices have a mode that increases the sensitivity of the touchscreen. This allows the touchscreen to be used more reliably with gloves, but can also result in unreliable and phantom inputs. However, thin gloves, such as medical gloves, are thin enough for users to wear when using touchscreens; this is mostly applicable to medical technology and machines.