A Review of Users Who Benefit from Closed Captioning and Its Variants

Grant Powell
School of Behavioral and Brain Sciences, University of Texas at Dallas
ACN 6763 Speech Perception
Professor Peter Assmann
December 13, 2022

Introduction

When most people hear the words “closed captioning,” they may automatically assume that it mainly benefits listeners who are Deaf and hard-of-hearing (HOH). That assumption tells only part of the story. While closed captioning was initially designed to improve the television viewing experience of Deaf and HOH listeners, it is increasingly being used by listeners who are not Deaf or HOH (Gernsbacher, 2015).

Closed captioning has been implemented in many situations for listeners who are, or are not, Deaf and HOH. One is education, where children and adults are learning a foreign language. A second is healthcare, where a patient must decide whether to provide informed consent for medical privacy or approval to proceed with some form of treatment. A third is air travel, where pilots must listen to the radio for important instructions from air traffic control. A fourth is telecommunication, where people converse through a phone or teleconference medium or view recorded media.

In most of these situations, closed captioning serves to alleviate problems with speech intelligibility, which broadly refers to various levels of intelligibility involving comprehension, recognition, identification, and recollection of speech. In the case of language learning, speech intelligibility supported by closed captioning may also aid the production of speech, such as pronunciation or articulation (Gernsbacher, 2015). Lately, closed captioning has become somewhat more accessible in adverse speech situations to listeners beyond those who are Deaf and HOH.

There has been an increase in the implementation of captioning software, which has broadly been identified under various names such as speech or voice recognition technology and real-time speech-to-text software. This technology now appears on many telecommunication devices, such as laptops, mobile phones, and electronic tablets, for teleconferencing, recording lectures and meetings for notetaking, and viewing recorded media. Closed captioning is no longer as enclosed and hidden from public visibility (hence the name “closed” captioning) as it often was when watching television in the 1980s and 1990s.

It used to be, during those times, that one had to navigate a convoluted set of menu options on the remote control or television to find closed captioning and turn it on. Furthermore, when closed captioning was first introduced, and before the availability and advancement of captioning software, it usually required a captioner to transcribe and edit the text for television shows beforehand. Nowadays, if a video includes closed captioning, or has captioning technology embedded within it, it can be turned on easily by finding the “CC” icon on the screen and clicking it. According to captioning companies such as 3PlayMedia, providing closed captioning through captioning software is considered more cost effective by most companies than hiring an outside captioning service where a team of captioners does extensive transcribing and editing (Klein, 2022). Despite some of the progress made in ensuring accessibility to closed captioning, it is still not as accessible to everyone as it should be.

Because most people still consider closed captioning to be intended mainly for listeners who are Deaf and HOH, its benefits for non-Deaf and HOH listeners are not well understood; the empirical evidence has often been published across separate topics such as Deaf education, second-language learning, adult literacy, and reading acquisition rather than being consolidated for better public visibility (Gernsbacher, 2015). This is one reason why access to closed captioning is still lacking. Another reason is that many video and content creators remain unaware of the legal government mandates requiring closed captioning and of the empirical benefits that closed captioning offers listeners who are, or are not, Deaf and HOH (Gernsbacher, 2015). Since most closed captioning, as mentioned earlier, is now generated automatically through captioning software such as speech recognition technology (SRT) rather than transcribed solely by an outside captioning service, the gap between inaccessibility and accessibility has narrowed to an extent.

Because of the convenience and cost savings of using captioning software, however, the quality of closed captioning has suffered: while captioning software has made significant gains and is viewed as the more cost-effective option, it is still not on the same level as captions transcribed and edited by outside captioning services in terms of accuracy and clarity of presentation (Gernsbacher, 2015). Nevertheless, one captioning technology, Real-Time Text (RTT), has made enough gains that it has been fully embraced by the Federal Communications Commission (FCC) as a permanent replacement for the teletypewriter (TTY) (Tinio, 2018).

Closed Captioning Usage by Airplane Pilots

Because the FCC has decided to transition from TTY to RTT on a permanent basis, deaf pilots and the Deaf Pilots Association have hoped that RTT will make its way into everyday radio communications between pilots and air traffic controllers (ATCs) (Tinio, 2018). This has been the hope for quite a long time because when deaf pilots earn their pilot certificate, granting them legal permission to fly privately, they can only take off from and land at airports that do not have control towers or require radio communication (Tinio, 2018). Their certificate states that it is “Not Valid for Flights Requiring the Use of Radio” (Tinio, 2018). Before control towers were built and radio communication became required in the early 1930s, following a significant increase in air traffic and the growing unreliability of alternative communication methods, deaf pilots were able to fly freely by relying on visual flight rules (VFR) (Tinio, 2018). VFR required pilots to fly in appropriate weather conditions with sufficient daylight and to stay a specified distance clear of clouds so they could see and avoid other airplanes (Tinio, 2018). Along with VFR, deaf pilots were also able to fly freely by communicating with ATCs using colored flags and light gun signals (Tinio, 2018). The limitation placed on their certificates is not entirely discouraging, as there are more than 18,000 uncontrolled airports in the United States and only 512 airports with control towers, but it still makes planning flights ahead of time somewhat difficult (Tinio, 2018).

They must avoid certain airports, areas, or airspaces that require radio communication; if they want to take off from or land at an airport that has a control tower and requires radio communication, they must either have a co-pilot or certified flight instructor (CFI) who can handle radio communications or arrange ahead of time with an ATC to use light gun signals (Tinio, 2018). Although certified deaf pilots cannot currently fly into towered airports requiring radio communication without the assistance of a co-pilot or CFI, changes being made to the future mode of radio communication may soon change their circumstances.

The Federal Aviation Administration (FAA) has been transitioning from the current mode of voice radio communication to a newer mode based on a Data Communications (Data Comm) system that is part of the Next Generation Air Transportation System (NextGen) program (Tinio, 2018). NextGen was reported on by National Public Radio (NPR) back in 2016 (Naylor, 2016). Data Comm is expected to rectify communication problems between pilots and ATCs, such as exchanges that become lengthy because of readback procedures and errors that arise from miscommunication (Tinio, 2018). The procedures for communication between pilots and ATCs typically involve identifying the sender and receiver of the message, comprehending the instructions that were sent and received, and reading back the instructions to confirm they were understood (Tinio, 2018). Errors can occur during these procedures through miscommunication due to language barriers, varied accents, and cultural differences in how instructions are presented, and such errors can be detrimental to flight safety (Tinio, 2018). It is unclear whether all air traffic control instructions, traffic advisories, and weather information will eventually be transmitted to the cockpit digitally through technologies such as an aeronautical datalink and ADS-B (automatic dependent surveillance-broadcast), which have been tested in commercial jets and are expected to reach general aviation aircraft within the next decade (Tinio, 2018). But when NPR first reported on NextGen and Data Comm, it highlighted one specific feature of the system.

That feature is a text messaging system for air traffic communication intended to improve communication between hearing pilots and ATCs (Naylor, 2016). ATCs and pilots exchange information electronically by texting each other (Naylor, 2016). This is similar to closed captioning, though it differs in its method of presentation and delivery. While most pilots are not Deaf or HOH, this was an essential policy measure to implement, especially considering that air traffic control towers at most major metropolitan airports can be very chatty places.

The chatter on the radio can affect a pilot’s speech intelligibility, as mentioned earlier, because the instructions received from ATCs contain many numbers, phrases, and words common to pilot-speak, designed to efficiently spell out the routes pilots need to reach their destinations (Naylor, 2016). What makes a text messaging system even more vital is that if a pilot mishears the instructions received from the ATC, even after carefully reading back what was heard, the pilot must go through the whole process again (Naylor, 2016). Route information must often be communicated to pilots by ATCs while a plane is still on the taxiway, because changing weather patterns, say, thunderstorms suddenly appearing on the radar, can force a change to the anticipated route before takeoff clearance is received (Naylor, 2016). Apart from the flight safety concerns raised by miscommunication, this can also take several minutes, delaying departures, burning fuel, and emitting carbon dioxide while the plane sits on the taxiway waiting to take off (Naylor, 2016). Pilots and administrators have noted the benefits of this text messaging system.

According to NPR, a United Parcel Service (UPS) pilot of a Boeing 767 has said that he is able to process route clearances, route changes, and frequency changes through the text messaging system (Naylor, 2016). Whereas he and other pilots used to receive most of their messages by voice over a radio frequency while taxiing toward the runway, they can now receive them in a text-message-like format on a screen near the center of the cockpit console (Naylor, 2016). This allows him and his colleagues to view the message and reprogram their computers in seconds, something that used to take minutes when messages arrived by voice (Naylor, 2016). This matters because, as he pointed out, at a busy metropolitan airport during a weather event there could be 30 or 40 airplanes waiting in line, and adding a couple of minutes of message comprehension multiplied by 40 aircraft easily led to over an hour in delays (Naylor, 2016). The text messaging system is expected to cut down on travel delays; an Assistant FAA Administrator said it should reduce situations where passengers miss their connecting flight or the shoes someone ordered do not arrive on time, making the whole system feel more stable and predictable so that passengers do not have to sit on a plane wondering what is going on and when they will get off upon arrival (Naylor, 2016). Another area where difficulty communicating occurs, not only among Deaf and HOH listeners but also non-Deaf and HOH listeners, is in medical settings.
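As a quick check of that figure, assuming “a couple” of minutes means roughly two minutes of voice readback per aircraft (a reading of the quote rather than a number stated in the source):

\[ 2~\text{min/aircraft} \times 40~\text{aircraft} = 80~\text{min} > 60~\text{min}, \]

which is consistent with the "over an hour" of cumulative delay described above.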

Closed Captioning Usage in Medical Settings

Spehar et al. (2016) conducted a study that examined how effective speech-to-text (STT) software could be in patient-to-physician encounters and what benefits it could provide in helping patients make informed consent decisions. They did this by determining whether STT could improve patients’ recall of their informed consent decision-making and whether, through post-encounter analysis of the transcripts of these conversations, the approach could help physicians improve their communication skills professionally and for training purposes (Spehar et al., 2016). One reason they tested this is that STT has advanced enough to be used in physician-to-patient encounters, especially in anesthesiology, where regional anesthesia and the risks that come with certain anesthesia techniques are not well understood by patients (Spehar et al., 2016). A second reason is that the real-time captioning STT provides could potentially improve Deaf and HOH listeners’ ability to comprehend and later recall conversations with physicians in other medical disciplines besides anesthesiology (Spehar et al., 2016). A third reason is that devices incorporating STT could be used to improve comprehension in Deaf and HOH listeners in general, and that, from a policy perspective, its benefits are probably most readily implemented in medical settings (Spehar et al., 2016). The results of the study led to some insightful conclusions about STT as it applies to patient-to-physician encounters in medical settings.

The study’s small sample consisted of twelve older adult subjects with hearing loss. Without STT, they performed just as poorly as the younger subjects in previous studies at retrieving the information presented to them by their physicians (Spehar et al., 2016). With STT, however, the researchers found that the subjects were better at recalling the information presented by their physicians (Spehar et al., 2016). Although recall was much poorer than recognition, the patients recalled nearly half again as many key items from the information presented to them with the assistance of STT as without it (Spehar et al., 2016).

This shows that STT can substantially benefit Deaf and HOH listeners’ ability to recall information such as that presented by a physician in a medical setting, by acting as a valuable supplement to the spoken word (Spehar et al., 2016). The reasoning is that, for listeners who are, or are not, Deaf and HOH, the text of a physician’s most recent utterance can function as a short-term memory store when sorting through the confusion that comes with unfamiliar medical terms and concepts (Spehar et al., 2016). The researchers believed that this function is the reason the Deaf and HOH subjects in their study benefitted significantly from STT (Spehar et al., 2016). Although the study had a small sample size, a limited number of patient-to-physician encounters, a controlled and simulated environment rather than a clinical setting, and volunteer subjects who may not be fully representative of Deaf and HOH patients encountered in a clinic, the researchers stood by their findings, including additional findings that could apply to areas of patient-to-physician encounters that do not necessarily involve Deaf and HOH listeners (Spehar et al., 2016).

There are two other areas where applying STT could be beneficial in patient-to-physician encounters. One is physician education (Spehar et al., 2016). The transcripts automatically produced by STT could be used by physicians to evaluate how well they communicated with patients. Notably, in the study the physicians were more enthusiastic about the use of captioning than the subjects were, with average scores of 5.1 and 5.5 for the physicians versus 4.1 and 3.5 for the subjects on post-encounter questionnaire items (Spehar et al., 2016). Being able to read what they said on the screen of the STT device not only allowed the physicians to correct any transcription errors made by the device, but also gave them opportunities to reflect on what they said and how they said it, helping to guide their conversations with patients and hone their communication skills (Spehar et al., 2016). A second area is with patients who speak different languages and have difficulty communicating with a physician (Spehar et al., 2016). Language translation software has improved enough that, combined with STT, it could provide simultaneous translation between a patient and physician who each speak a different language (Spehar et al., 2016). However, this potential benefit to speakers of different languages, or speakers facing other communication barriers, does not have to be specific to medical settings; it could also extend into another area, educational settings.

Closed Captioning Usage in Educational Settings

In educational settings, captions can benefit language learners, children or adults, who are learning either a primary or a secondary language (Gernsbacher, 2015). Captions give language learners opportunities to analyze the words that appear on the screen as they aurally process the language, using the visualization of the words to identify linguistic units such as phonemes, morphemes, lexemes, syntax, and context, along with grammar, semantics, and pragmatics (Gass et al., 2019). This helps learners map the aural speech stream onto individual, meaningful words and phrases (Gass et al., 2019). In their past studies, Gass et al. (2019) investigated whether second language (L2) learners used captions as a tool for listening comprehension in a classroom setting.

They found that captions helped those learners with listening comprehension by allowing them to segment speech streams into meaningful components (Gass et al., 2019). They extended those findings in a new study, “How Captions Help People Learn Languages: A Working Memory, Eye-tracking Study,” which examined how captions support L2 learners’ listening comprehension by looking at how individual differences, specifically working memory (WM) capacity, influence the use of captions (Gass et al., 2019). The study examined two populations of learners, English as a second language (ESL) learners and English speakers learning Spanish (Spanish L2 learners), observing the common patterns that emerged in their language learning (Gass et al., 2019). Its results provide insights into the benefits captions may offer for language learning, as well as best practices and recommendations for implementing them for that purpose.

Gass et al. (2019) found that the results answered the study’s first research question - “Does captioning aid comprehension?” - by showing that captioning did promote L2 learners’ video comprehension, thus supporting the researchers’ past work. In answer to the second research question - “What is the relationship between WM and L2 video comprehension? Do learners with high WM capacity comprehend more than learners with low WM capacity?” - the results showed little effect of WM on L2 video comprehension for the Spanish L2 learners but a medium effect for the ESL learners.

This may reflect the possibility that the ESL learners’ proficiency levels were higher than those of the Spanish L2 learners, because the ESL learners’ scores on a free-recall task administered after watching a short captioned video were higher than the Spanish L2 learners’ scores (Gass et al., 2019). Gass et al. (2019) theorized that a certain level of proficiency, or proficiency threshold, must be reached before WM can influence the video comprehension of L2 learners. In other words, WM capacity does not seem to differentiate learners when their proficiency level is lower, because comprehension at that basic level amounts to a word-by-word interpretation, and it may be difficult to put individual words together into a meaningful stream of speech (Gass et al., 2019). In examining the results for their third research question - “What is the relationship between WM and caption-reading behavior? Are there differences in caption-reading behavior between learners with high WM and learners with low WM?” - Gass et al. (2019) found interesting differences between first-time (T1) and second-time (T2) viewing of a video in an L2.

During T2 viewing, the reading behavior of the Spanish L2 and ESL learners was similar; however, the high-WM groups reduced their caption reading time while the low-WM groups increased theirs (Gass et al., 2019). To clarify, the researchers identified the learners in the Spanish L2 group (experiment one) and the ESL group (experiment two) who performed better on the free-recall test and assigned them to the high-WM group, while the remaining learners were assigned to the low-WM group of each experiment (Gass et al., 2019). Gass et al. (2019) decided they could not confidently conclude that any correlation existed from this finding and instead suggested that it most likely reflected individual differences accounting for variation in information processing during multimedia learning. To conclude with confidence that group differences were occurring, they would have needed a much larger sample size to reveal greater differences between those with low and high WM capacity (Gass et al., 2019). In examining the results for the fourth research question - “What is the relationship between caption-reading behavior and L2 video comprehension? Are there differences in caption-reading behaviors between learners who demonstrate high video-comprehension and learners who demonstrate low video-comprehension?” - Gass et al. (2019) found that language immersion may have played a role in the differences between the two learning groups.

They found that language immersion may have been the deciding factor in why the ESL group showed fewer differences between the high- and low-comprehension groups in time spent reading captions during T1 versus T2 viewing (Gass et al., 2019). This is mainly because the ESL group had spent more time gaining experience speaking English in an English-speaking country and had developed a certain level of proficiency (Gass et al., 2019). The Spanish L2 group, on the other hand, had not, and it showed differences in caption reading time between the two comprehension groups: the high-comprehension learners spent less time overall reading captions during T1 viewing and less time rereading than the low-comprehension learners (Gass et al., 2019). The high-comprehension learners were most likely relying more on the audio or on the visuals of the video than on the captions (Gass et al., 2019). Together, the findings that answered this fourth research question and the other three lead to insights into best practices and recommendations for implementing captions for L2 learning.

Based on the results, Gass et al. (2019) recommend exposing L2 learners to captions when watching videos in an L2, because more experience watching captioned videos allows learners to develop the ability to efficiently split their attention among multiple modes of input: captions, audio, and other visual information. This balanced approach to using multiple sources of input helps them parse and understand the incoming speech stream and aids their learning (Gass et al., 2019). However, it is still important to consider a learner’s WM capacity and L2 proficiency level relative to a video’s content when implementing captions (Gass et al., 2019).

While L2 learners used captions regardless of L2 proficiency, and those with higher WM, proficiency, and comprehension still read them, L2 learners with lower WM, proficiency, and comprehension tended to benefit more from the captions, which acted as a helpful processing aid relative to the difficulty level of the video (Gass et al., 2019). Captions did this by supporting attentional control, serving as a salient, attention-grabbing piece of written information when the aural information became too difficult and inaccessible for learning (Gass et al., 2019). L2 learners with higher WM capacity used the captions less than those with lower WM capacity, most notably during T2 viewing (Gass et al., 2019). The reason may be that they were able to hold more key information in the episodic buffer of their WM during T1 viewing and thus had already gleaned enough of the needed information from the captions the first time around (Gass et al., 2019). This is important to note when implementing captions as an L2 learning aid because, while captions clearly aid comprehension, an educator must choose them carefully to ensure they match the learners’ proficiency level (Gass et al., 2019). Otherwise, for a learner whose proficiency, comprehension, and WM are very high, providing captions could eventually serve more as a nuisance (Gass et al., 2019). Besides language learning in educational settings, another area where implementing captions is vital for Deaf and HOH and, occasionally, non-Deaf and HOH listeners is daily communication.

Closed Captioning Usage in Daily Communication

The implementation and availability of closed captioning is essential to Deaf and HOH listeners for daily communication because communicating through technology has become more normalized (Zhong et al., 2022). As a result, the challenges of participating in telecommunication, conversing on the telephone and on teleconference platforms, and viewing recorded media, are growing, because conversing on the telephone has become important for staying in contact with friends and family, scheduling medical appointments, and participating in telehealth (Zhong et al., 2022). The novel coronavirus (COVID-19) pandemic exacerbated these challenges by increasing the demand for teleconferencing through orders and recommendations to reduce in-person gatherings (Zhong et al., 2022). While teleconferencing is very important for reducing the risk of infection and can improve productivity by making work and communication more efficient, it may present communication difficulties for Deaf and HOH listeners in general and for non-Deaf and HOH listeners in noisy environments. This creates a need for the implementation and availability of closed captioning, including improvements in its delivery and presentation, for daily communication (Zhong et al., 2022).

In a systematic review, Zhong et al. (2022) found across ten studies that text captions added to auditory signals have the potential to help listeners who are, or are not, Deaf and HOH understand speech, consistent with the demonstrated benefits of providing text captions as asynchronous feedback for recognition of distorted speech. This supports the need to implement and provide access to closed captioning, as do the findings that, in studies where auditory signal integrity was manipulated, the benefits of text captions were greater when that integrity was low (Zhong et al., 2022). Notably, the benefits of closed captioning for daily communication appear to extend to older adults with hearing loss, a growing population of listeners with hearing loss, because the benefits of text captions do not seem to vary with listener age (Zhong et al., 2022).

Although older adults with hearing loss are less able than younger adults to benefit from visual speech information, also known as speech- or lip-reading, older adults benefit from text captions, presumably because of the additional context that text captions provide (Zhong et al., 2022). Existing evidence has shown that older adults can benefit from context more than younger adults do (Zhong et al., 2022). Since older adults tend to visit and consult with medical personnel more often as they age, the benefits of text captions found for older adults are important. Implementing text captions in teleconferencing applications for telehealth appointments could significantly benefit those with hearing loss who might have more trouble communicating in a telephone appointment or video-conferencing session due to reduced audibility (Zhong et al., 2022). However, there is still room to maximize the benefits of closed captioning for daily communication among listeners who are, or are not, Deaf and HOH through improvements in its delivery and presentation.

When Deaf and HOH listeners communicate through telecommunication with the help of text captioning, certain text characteristics, such as text integrity and timing, may enhance the benefits of text captions when presented and delivered efficiently (Zhong et al., 2022). It has been found that as the delay between the speech signal and the presentation of the corresponding text captions increased, the benefit of the captions decreased (Zhong et al., 2022). This highlights the importance of fast automated speech-to-text algorithms when implementing text captions for telecommunication, as well as policy changes in governmental guidelines that would better guide the creators of these algorithms (Zhong et al., 2022). The FCC’s guidelines for telephone text captions recommend 125 words per minute with 98% accuracy, yet some reports estimate that conversational speaking rates can reach as high as 320 words per minute (Zhong et al., 2022). If speech-to-text algorithms are tuned mainly to meet the FCC guideline of 125 words per minute, then in conversations faster than 125 words per minute the text captions will lag behind the auditory signal, reducing their benefit (Zhong et al., 2022).
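To make the scale of that mismatch concrete, here is a minimal sketch, assuming constant rates and taking one minute of continuous speech as an illustrative duration; the rates are the figures cited above, while the function name and structure are illustrative rather than drawn from the cited work.

# Minimal sketch: caption lag when captions are rate-limited below the speaking rate.
# Assumes constant rates; the 60-second duration is an illustrative assumption.
SPEAKING_RATE_WPM = 320   # estimated conversational speaking rate cited by Zhong et al. (2022)
CAPTION_RATE_WPM = 125    # FCC-recommended caption rate for telephones

def caption_backlog(duration_seconds: float) -> tuple[float, float]:
    """Return (backlog in words, extra seconds needed to display that backlog)."""
    spoken = SPEAKING_RATE_WPM * duration_seconds / 60
    captioned = CAPTION_RATE_WPM * duration_seconds / 60
    backlog_words = spoken - captioned
    catch_up_seconds = backlog_words / (CAPTION_RATE_WPM / 60)
    return backlog_words, catch_up_seconds

if __name__ == "__main__":
    words, seconds = caption_backlog(60)  # one minute of continuous speech
    print(f"After 60 s: captions are {words:.0f} words (~{seconds:.0f} s) behind the speaker.")

Under these assumptions, a single minute of fast conversation leaves roughly 195 words, or more than 90 seconds’ worth of captions, still undisplayed, which illustrates why caption benefit shrinks as delay accumulates.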

Conclusion

Overall, the hope is that this review gives a general idea of how closed captioning and its variants, such as RTT and STT, have been and can be implemented in air travel, medical, educational, and daily communication settings to benefit listeners who are, or are not, Deaf and HOH by alleviating problems with speech intelligibility. The hope is also that it provides readers in medical, educational, and daily communication settings with guidelines on some best practices and recommendations for implementing closed captioning and its variants. Before closing, a few final notes need to be addressed on some details of the review’s content.

There needs to be experimental research, or more of it, accessible to the public that demonstrates the benefits of the text messaging system of NextGen’s Data Comm system. There has likely been considerable research on various features of Data Comm; otherwise, the system probably would not have been approved by the FAA as a newer, improved mode of radio communication under the NextGen program. However, a search of the FAA’s website for research studies on the Data Comm text messaging system came up mostly empty, and most of the evidence on its benefits found so far is anecdotal, coming from reputable news sources such as NPR. Because this is all related to air travel, it would also be interesting to see experimental research, or more of it, if it exists, on the benefits of closed captioning in airports and on airplanes, not for televised entertainment, since that research exists for other settings, but for televisions or live screen devices geared toward helping listeners who are, or are not, Deaf and HOH stay informed about gate changes, sudden emergencies, and other important announcements that can make air travel less stressful (Collaborative for Communication Access via Captioning, 2016). This is relevant because, according to 3PlayMedia, the United States Department of Transportation (DOT) issued a ruling that went into effect in 2015, an amendment to Section 504 of the Rehabilitation Act of 1973, requiring closed captioning on all televisions in US airports that receive federal funding and experience at least 10,000 flights annually (Griffin, 2015). Besides air travel, experimental research is also needed on the benefits of closed captioning in another area of medical settings: the operating room.

There needs to be experimental research, or more of it, if it exists, on the benefits of closed captioning in operating rooms, where surgeries and other procedures are performed. Medical personnel must wear masks in these situations, so the audiovisual cues, such as speech- and lip-reading, that can sometimes help in deciphering speech are unavailable. News stories occasionally appear about Deaf and HOH medical students or personnel completing their training or performing their duties in the operating room with the help of text captions on a television screen showing what each person in the room is saying. After all, communication in operating rooms is critical for avoiding medical errors. Finally, clarification is needed on how research shows closed captioning benefits learners acquiring their primary language, as mentioned in the section on closed captioning in educational settings, especially with regard to reading skills.

The implementation of closed captioning has been found to benefit children’s reading skills in their primary language in much the same way it benefits L2 learners, because learning to read also involves mapping sound and meaning onto text (Gernsbacher, 2015). Watching videos with both audio and text captions has been found to improve reading skills by helping non-Deaf and HOH children define content words heard in the videos, pronounce novel words, recognize vocabulary items, and make sense of what is happening in the videos (Gernsbacher, 2015). This has the potential to grow and expand the vocabulary of young listeners who are not Deaf and HOH. Lastly, in India, after researchers encouraged India’s national television to begin captioning Bollywood music videos, a study assessing viewers over a period of time found that the literacy of adults who watched the captioned videos increased more than that of adults who rarely or never watched them (Gernsbacher, 2015).

References

Collaborative for Communication Access via Captioning. (2016). Captioning in transportation – Air travel and more. http://ccacaptioning.org/captioning-transportation/

Gass, S., Winke, P., Isbell, D. R., & Ahn, J. (2019). How captions help people learn languages: A working-memory, eye-tracking study. Language Learning & Technology, 23(2), 84-104. https://doi.org/10125/44684

Gernsbacher, M. A. (2015). Video captions benefit everyone. Policy Insights from the Behavioral and Brain Sciences, 2(1), 195-202. https://doi.org/10.1177/2372732215602130

Griffin, E. (2015, August 10). US DOT officially requires closed captioning on airport TVs. 3PlayMedia. https://www.3playmedia.com/blog/us-dot-officially-requires-closed-captioning-on-airport-tvs/

Klein, R. (2022, July 25). What’s the true price of closed captioning services? 3PlayMedia. https://www.3playmedia.com/blog/how-much-does-closed-captioning-service-cost/

Naylor, B. (2016, October 3). Air traffic controllers and pilots can now communicate electronically. NPR. https://www.npr.org/sections/alltechconsidered/2016/10/03/496393787/air-traffic-controllers-and-pilots-can-now-communicate-electronically

Payne, B. R., Silcox, J. W., Crandell, H. A., Lash, A., Ferguson, S. H., & Lohani, M. (2022). Text captioning buffers against the effects of background noise and hearing loss on memory for speech. Ear & Hearing, 43(1), 115-127. https://doi.org/10.1097/AUD.0000000000001079

Spehar, B., Tye-Murray, N., Myerson, J., & Murray, D. J. (2016). Real-time captioning for improving informed consent: Patient and physician benefits. Regional Anesthesia and Pain Medicine, 41(1), 65-68. https://doi.org/10.1097/AAP.0000000000000347

Tinio, R. F. (2018). Perceiving the communication methods between deaf pilots and air traffic control (Publication No. 1464) [Doctoral dissertation, Purdue University]. Open Access Theses.

Zhong, L., Noud, B. P., Pruitt, H., Marcrum, S. C., & Picou, E. M. (2022). Effects of text supplementation on speech intelligibility for listeners with normal and impaired hearing: A systematic review with implications for telecommunication. International Journal of Audiology, 61(1), 1-11. https://doi.org/10.1080/14992027.2021.1937346