Screen reading
Screen reading is the act of reading text on a computer screen, smartphone, e-book reader, or other electronic display.
A screen reader is an assistive technology software program that enables people with visual impairments to interact with digital content. It works by interpreting the text and other elements (like images, buttons, and menus) displayed on a computer screen and conveying this information to the user through non-visual means, primarily speech synthesis (reading the text aloud) or a Braille display (translating the text into Braille characters).
Here's a breakdown of how screen readers work and related information:
How Screen Readers Work:
- Code Interpretation: Screen readers analyze the underlying code of websites, applications, and documents (like HTML, XML, and operating system APIs).
- Content Extraction and Identification: They identify different types of content, such as text, headings, links, lists, form fields, and images.
- Text-to-Speech (TTS): The extracted text is then converted into synthesized speech, allowing the user to hear the on-screen content. Users can often customize the voice, speaking rate, pitch, and volume.
- Braille Output: For users who read Braille, screen readers can send the text information to a refreshable Braille display, which dynamically changes the Braille characters.
- Navigation: Users navigate through the digital content using keyboard commands, touch-screen gestures (on mobile devices), or Braille display input. Screen readers provide feedback about the current location and the type of element in focus, and they allow users to move by character, word, line, paragraph, heading, link, form control, and more.
- Alternative Text (Alt Text) for Images: Screen readers rely on the alternative text provided for images to describe their content to the user. Well-written alt text is crucial for accessibility.
- Semantic Structure: Screen readers utilize the semantic structure of web pages and documents (e.g., proper heading tags such as <h1> through <h6>, and list tags such as <ul>, <ol>, and <li>) to provide better context and navigation options.

Key Features of Screen Readers:
- Reading Modes: Different modes to read by character, word, line, sentence, or paragraph.
- Navigation Tools: Shortcuts to jump to specific elements like headings, links, and form controls.
- Information about Formatting: Some screen readers can announce text formatting such as bold, italics, and font changes.
- Spelling: The ability to spell out words character by character.
- Finding Text: Features to search for specific words or phrases on the screen.
- Customization: Options to adjust speech settings, keyboard layouts, and other preferences.
- Scripting: Advanced users can often write scripts to customize how the screen reader interacts with specific applications.
- Language Support: Most screen readers support multiple languages, and some can automatically switch languages based on the content.
- Optical Character Recognition (OCR): Some advanced screen readers have OCR capabilities to recognize text in images or inaccessible PDFs.

Popular Screen Readers:
- For Windows: JAWS (Job Access With Speech), a powerful and widely used commercial screen reader; NVDA (NonVisual Desktop Access), a free and open-source screen reader; Narrator, a basic screen reader built into Windows; Dolphin ScreenReader, a commercial screen reader with natural voices and Braille support.
- For macOS and iOS: VoiceOver, a free screen reader integrated into Apple devices.
- For Android: TalkBack, a free screen reader built into Android.
- For Linux: Orca, a free and open-source screen reader for the GNOME desktop environment; BRLTTY, a screen reader specifically for Braille displays on Linux/Unix consoles; Emacspeak, a speech interface that turns Emacs into an audio desktop.
- For ChromeOS: ChromeVox, a built-in screen reader for Chromebooks.
- Web-based: WebAnywhere, a free, web-based screen reader that doesn't require installation.
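The content-extraction step described above can be illustrated with a toy sketch, not any real screen reader's implementation: using Python's standard-library html.parser, it walks an HTML fragment in document order and emits spoken-style announcements for headings, links, and images. The announcement wording and the sample markup are illustrative assumptions.

```python
from html.parser import HTMLParser

class Linearizer(HTMLParser):
    """Toy sketch of a screen reader's extraction step: walk the HTML
    and emit spoken-style announcements in document order."""

    def __init__(self):
        super().__init__()
        self.announcements = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            # Announce the heading level so users can navigate by structure
            self.announcements.append(f"heading level {tag[1]}")
        elif tag == "a":
            self.announcements.append("link")
        elif tag == "img":
            # Without alt text, only a generic announcement is possible
            alt = attrs.get("alt")
            self.announcements.append(f"image, {alt}" if alt else "image, no description")

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.announcements.append(text)

page = """
<h1>Screen reading</h1>
<p>Read the <a href="/guide">guide</a>.</p>
<img src="chart.png" alt="Sales chart">
<img src="logo.png">
"""

reader = Linearizer()
reader.feed(page)
print(reader.announcements)
```

Running the sketch prints the linearized announcements; note how the second image, which lacks alt text, yields only a generic "image, no description", which is why descriptive alt text matters for accessibility.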
Screen Reading and Accessibility:
Screen readers are fundamental to web accessibility. For websites and applications to be usable by individuals with visual impairments, they need to be designed and developed with screen reader compatibility in mind. This includes:
- Semantic HTML: Using HTML tags correctly to convey meaning and structure.
- Descriptive Alt Text for Images: Providing meaningful text descriptions for all images.
- Proper Heading Structures: Using headings (<h1> to <h6>) logically to organize content.
- Accessible Forms: Labeling form fields correctly and providing clear instructions.
- Keyboard Navigation: Ensuring all interactive elements can be accessed and operated using a keyboard.
- Clear Link Text: Providing descriptive text for links that indicates their destination.
- Accessible Tables: Structuring tables correctly with header rows and columns.
- Avoiding Automatic Media and Navigation: Giving users control over media playback and navigation.
- ARIA (Accessible Rich Internet Applications): Using ARIA attributes to provide additional semantic information to screen readers about dynamic content and interactive elements.

Understanding how screen readers work is crucial for developers and content creators to build inclusive digital experiences. By following accessibility guidelines, they can ensure that their websites and applications are usable by everyone, including individuals who rely on screen readers.

Discovery
Louis Émile Javal, a French ophthalmologist and founder of an ophthalmology laboratory in Paris, is credited with introducing the term saccades into eye movement research.
Javal discovered that while reading, one's eyes tend to jump across the text in saccades and stop intermittently along each line in fixations.[1] Because of the lack of technology at the time, naked-eye observation was used to study eye movement, until eye-tracking experiments were conducted in the late 19th and mid-20th centuries in an attempt to discover a pattern in eye fixations while reading.[1]

Research

F-Pattern
In a 1997 study conducted by Jakob Nielsen, a web usability expert who co-founded the usability consulting company Nielsen Norman Group with Donald Norman, it was found that people generally read 25% slower on a computer screen than on a printed page.[2] The researchers state that this is only true when reading on an older type of computer screen with a low scan rate. In an additional study in 2006, Nielsen also found that people read web pages in an F-shaped pattern that consists of two horizontal stripes followed by a vertical stripe.[3] He had 232 participants fitted with eye-tracking cameras to trace their eye movements as they read online texts and webpages. The findings showed that people do not read the text on webpages word by word, but instead generally read horizontally across the top of the webpage, then in a second horizontal movement slightly lower on the page, and lastly scan vertically down the left side of the screen.[3] The Software Usability Research Laboratory at Wichita State University did a subsequent study in 2007 testing eye gaze patterns while searching versus browsing a website, and the results confirmed that users appeared to follow Nielsen's 'F' pattern while browsing and searching through text-based pages.[4] A group of German researchers conducted a study that examined the web browsing behavior of 25 participants over the course of around one hundred days.
The researchers concluded that "browsing is a rapidly interactive activity" and that web pages are mostly viewed for 10 seconds or less.[5] Nielsen analyzed this data in 2008 and found that, on average, users read 20–28% of the content on a webpage.[6]

Google Golden Triangle
A technical report from Eyetools, DidIt, and Enquiro, using search results from the Google search engine, indicated that readers primarily looked at a triangular area at the top and left side of the screen. This corresponds to Nielsen's F-shaped pattern and was dubbed the Google Golden Triangle.[7] A 2014 Mediative blog post[8] showed evidence of the decline of the Golden Triangle phenomenon since 2005, as users view more search result listings than before.

Comparisons to reading printed text
Further information: Eye movement in reading
Since the first notion of screen reading, many studies have been performed to discern any differences between reading on an electronic device and reading on paper. In a 2013 study, a group of 72 high school students in Norway were randomly assigned to one of two groups: one that read PDF files on a computer and one that read standard paper. The students were put through various tests of reading comprehension and vocabulary. The results indicated that those who read the PDF files performed much worse than those reading on paper. The researchers concluded that certain aspects of screen reading, such as scrolling, can impede comprehension.[9] However, not all experiments have concluded that reading from a digitized screen can be detrimental.
The same year, another experiment was conducted on 90 undergraduates at a college in Western New York involving paper reading, computer reading, and e-book reading. Like the students in the Norwegian experiment, the participants were tested for comprehension after reading a number of passages: five focused on facts and information and the other five on narratives. No significant difference was found between any of the forms of reading for either type of passage. However, the researchers noted that because the participants were college students accustomed to using technology, they may react differently to reading on electronic devices than older individuals.[10] A 2014 study by Tirza Lauterman and Rakefet Ackerman allowed subjects to choose between reading digitally and reading printed pages. Those who chose to read digitally performed worse than those who used print. However, by practicing with PDF files, subjects who preferred to read on computers were able to overcome what the researchers labeled "screen inferiority" and managed to score just as well as the paper readers, who did not improve with practice. Lauterman and Ackerman concluded that the study supported the idea that screen reading is shallower than paper reading, but that with practice the shallowness can be removed as an impediment. No consensus has yet been reached among professionals on whether reading on a screen differs significantly from reading printed text.[11]

Criticism
Critics have voiced concerns about screen reading, though some have taken a more positive stance.
Kevin Kelly believes that we are transitioning from "book fluency to screen fluency, from literacy to visuality".[12][13] Anne Mangen holds that because of the material nature of a printed book the reader is more engaged with the text, while the opposite is true of a digital text, with which the reader engages in a "shallower, less focused way".[14][15] Nicholas Carr, author of The Shallows, says that "the ability to skim text is every bit as important as the ability to read deeply. What is… troubling, is that skimming is becoming our dominant mode of reading".[16] Studies have shown that prolonged exposure to computer screens can have negative effects on the eyes, causing symptoms of computer vision syndrome (CVS) that include strained eyes and blurred vision. The occurrence of CVS has grown greatly over the past few years, affecting a large majority of American workers who spend over three hours a day on computers in some form.[17]

See also
Computer literacy

References
[1] Wade, Nicholas J (2010). "Pioneers of Eye Movement Research". i-Perception. 1 (2): 33–68. doi:10.1068/i0389. PMC 3563053. PMID 23396982.
[2] Beam, Alex (2009-06-19). "I screen, you screen, we all screen". The Boston Globe.
[3] Nielsen, Jakob (2006-04-17). "F-Shaped Pattern For Reading Web Content".
[4] Shrestha, Sav; Lenz, Kelsi. "Eye Gaze Patterns while Searching vs. Browsing a Website". SURL, January 14, 2007. Retrieved February 19, 2016.
[5] Weinreich, H.; et al. "Not Quite the Average: An Empirical Study of Web Use". ACM Transactions on the Web, Vol. 2, No. 1, Article 5, February 2008. Retrieved February 20, 2016.
[6] Nielsen, Jakob. "How Little Do Users Read?". Nielsen Norman Group, May 6, 2008. Retrieved February 20, 2016.
[7] "Google Search's Golden Triangle". Eyetools. Archived from the original on January 13, 2013. Retrieved August 9, 2015.
[8] "Keeping an eye on Google – Eye tracking SERPs through the years".
[9] Mangen, Anne; Walgermo, Bente R.; Brønnick, Kolbjørn (2013). "Reading linear texts on paper versus computer screen: Effects on reading comprehension". International Journal of Educational Research. 58: 61–68. doi:10.1016/j.ijer.2012.12.002.
[10] Margolin, Sara J.; Driscoll, Casey; Toland, Michael J.; Kegler, Jennifer Little (2013). "E-readers, Computer Screens, or Paper: Does Reading Comprehension Change Across Media Platforms?". Applied Cognitive Psychology. 27 (4): 512–519. doi:10.1002/acp.2930. hdl:20.500.12648/2582. ISSN 1099-0720.
[11] Lauterman, Tirza; Ackerman, Rakefet (2014). "Overcoming screen inferiority in learning and calibration". Computers in Human Behavior. 35: 455–463. doi:10.1016/j.chb.2014.02.046.
[12] Kelly, Kevin (2008-11-21). "Becoming Screen Literate". The New York Times.
[13] Rosen, Christine. "People of the Screen". The New Atlantis, No. 22, Fall 2008, pp. 20–32.
[14] Mangen, Anne (2008). "Hypertext fiction reading: haptics and immersion". Journal of Research in Reading. 31 (4): 404–419. doi:10.1111/j.1467-9817.2008.00380.x. hdl:11250/185932.
[15] Bauerlein, Mark (2008-09-19). "Online Literacy Is a Lesser Kind: Slow reading counterbalances Web skimming". The Chronicle of Higher Education. 54 (31): B7.
[16] Carr, Nicholas (2010). The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton. p. 138. ISBN 978-0393339758.
[17] Blehm, Clayton; Vishnu, Seema; Khattak, Ashbala; Mitra, Shrabanee; Yee, Richard W. (2005). "Computer Vision Syndrome: A Review". Survey of Ophthalmology. 50 (3): 253–262. doi:10.1016/j.survophthal.2005.02.008. PMID 15850814.