Special Issue: Building Bridges between Film Studies and Translation Studies

Signifying codes of audiovisual products:

Implications in subtitling for the D/deaf and the hard of hearing

By Ana Tamayo (Universidad del País Vasco UPV/EHU, Spain)

Abstract

Audiovisual products are complex multimodal constructs that produce meaning through the interaction of all the sign systems delivered through the acoustic and visual channels, both verbally and non-verbally (Delabastita 1990; Chaume 2004; Gambier 2013). Because of this complex multimodal nature, when creating subtitles for the D/deaf and the hard of hearing (SDH) the audiovisual translator has the responsibility of being aware of the existence, and understanding the interaction, of the signifying codes of the visual and acoustic channels in order to create subtitles that are relevant to the target audience. Little has been said about signifying codes and their implications for accessible audiovisual translation (AVT) (cf. Tamayo and Chaume 2016); hence, the present article seeks to fill this gap and suggests an interdisciplinary approach to the study and practice of SDH that takes both Film Studies and Translation Studies into account. It focuses on how the signifying codes of audiovisual texts might affect subtitling decisions, taking into consideration the needs of D/deaf and hard of hearing (DHH) audiences (for example, the need to make some sound elements explicit in the subtitles or the need for an adequate subtitling speed) as well as the technical aspects and formal restrictions (Martí Ferriol 2010) of this AVT mode. Since the target audience of SDH has limited or no access to sound, special attention is paid to the implications of the signifying codes of the acoustic channel (linguistic code, paralinguistic code, special effects code, musical code and sound arrangement code) and to how their meanings and interactions with other codes can be conveyed in the form of subtitles for DHH audiences. Although the approach in this article is mainly theoretical, possible subtitling solutions are illustrated with real examples or with possibilities mentioned in previous research and publications, including some creative subtitling options.

Keywords: audiovisual translation, signifying codes, subtitling for the D/deaf and hard of hearing, SDH, film studies, creative subtitles

©inTRAlinea & Ana Tamayo (2017).
"Signifying codes of audiovisual products: Implications in subtitling for the D/deaf and the hard of hearing"
inTRAlinea Special Issue: Building Bridges between Film Studies and Translation Studies
Edited by: Juan José Martínez Sierra & Beatriz Cerezo Merchán
This article can be freely reproduced under Creative Commons License.
Stable URL: https://www.intralinea.org/specials/article/2249

1. Introduction: Film semiotics and SDH

In the field of audiovisual translation, it has long been widely accepted that audiovisual products are multimodal and multisemiotic products that construct meaning through the interaction of two channels (acoustic and visual) and their signifying codes (Delabastita 1990; Chaume 2004; Gambier 2013). Audiovisual texts, thus, are not conceived as the mere addition of those channels and their signifying codes; rather, their meaning and identity arise from the interaction and cohesion of all the acoustic and visual elements of a film. Although more signifying codes might be found in audiovisual products, the following table shows the eleven codes with the most implications for audiovisual translation (AVT):

| Acoustic Channel       | Visual Channel    |
|------------------------|-------------------|
| Linguistic code        | Iconographic code |
| Paralinguistic code    | Photographic code |
| Musical code           | Mobility code     |
| Special effects code   | Planning code     |
| Sound arrangement code | Graphic code      |
|                        | Syntactic code    |

Table 1. Signifying codes in AVT (Chaume 2001 and 2004).

DHH people have limited or no access to one of the two information channels and to five of the eleven signifying codes shown in Table 1. Yet they lack access not merely to half of the film, but to more than half, since they cannot fully access the cohesion and coherence that bind the acoustic channel and its signifying codes to the visual channel and to the plot of the film. These terms are key to understanding the implications of signifying codes in SDH and are defined in the discipline as follows: ‘coherence is a property of texts that are well written, and helps the message come across, whereas the term cohesion refers to the techniques writers have at their disposal to promote such coherence’ (Díaz Cintas and Remael 2007: 171). In the case introduced in the present article, cohesion and coherence go beyond the concepts traditionally linked to written discourse, as they take into account intersemiotic coherence and cohesion (Chaume 2001), which ‘refers to the way it [intersemiotic cohesion] connects language directly to the soundtrack and to images on screen, making use of the information they supply to create a coherent linguistic-visual whole’ (Díaz Cintas and Remael 2007: 171). Hence, a DHH person is not only losing what can be heard in a film, but also how that information interacts with the visuals and with the identity of the film as a whole.

Traditionally, two AVT modes have been used to provide DHH audiences with acoustic information and thus make audiovisual products accessible, namely subtitling for the D/deaf and the hard of hearing (SDH) and sign language interpreting. Of the two, SDH, whether live or pre-recorded, has been the main method of achieving such accessibility, as it is more cost-effective and reaches a wider audience, one that comprises not only D/deaf signing people, but also hard of hearing people with no knowledge of sign language and communities with no hearing loss, such as language learners.

Although its main goal is to explicitate acoustic information, little has been said about how SDH can (or cannot) help restore the cohesion and interaction that are inevitably lost through the lack of access to the acoustic channel. In other words, it seems evident that acoustic information (music, sound effects, paralinguistic features…) interacts with visual elements (by means of redundancy, for instance) to present a coherent and cohesive product that has meaning as a whole. Substituting such acoustic information with written discourse could mean losing the coherence and cohesion of the original product and, ultimately, creating new interactions with the image that could lead to a different type of cohesion.

Furthermore, SDH is not only about the acoustic information, since the presence of subtitles in a film adds a visual element to the original product. The visual channel is thus enlarged with what could fit into the graphic code as external subtitles, which were not part of the original product. This addition inevitably changes the visual identity of the film and, consequently, its whole identity. Adding a visual element that was not part of the original product, with the aim of overcoming limited access to the acoustic channel, inevitably changes the way the signifying codes interact with each other, and thus a new cohesion and a new identity are created. For example, in a scene with no dialogue, a certain atmosphere could be created by the interaction of music or sound effects with shot changes, lighting or characters’ movements. There might be a certain cohesion that aims at drawing attention to specific visual elements in the scene so as to create specific reactions or expectations in the viewer. Adding external subtitles to explicitate the sound or music would modify viewers’ attention, and thus the film’s identity, in terms of reception, could be different.

The following sections explore in depth the implications of the different signifying codes for the practice of SDH, but also how the addition of SDH might affect the way the signifying codes themselves are received. It will be argued that the implications of signifying codes in SDH run in both directions. On the one hand, since DHH people have limited access to sound, the information provided by the acoustic channel has implications for how it can be conveyed in subtitles. On the other hand, since subtitles appear on screen, they have implications for the visual channel, for how it is perceived, and for how the film becomes a new product, with a different cohesion and identity.

2. The acoustic channel

Of the two information channels in audiovisual texts, the acoustic channel clearly has far more implications for the way SDH is displayed. As in all types of AVT, of all the signifying codes presented in Table 1, the linguistic code is the one with the most implications for SDH. Even though SDH is usually intralingual (either because the original dialogues are in the same language as the subtitles or because the subtitles are created from the dubbed version), the absence of a change of language does not mean fewer implications. The way DHH people understand written discourse, especially prelocutively and profoundly Deaf people, means that subtitles have to be conveyed in a special way. Neves (2005: 97-98) points out that ‘for hearers, reading comes as a natural bi-product of the primary auditory based language acquired during the early years of infancy’, whereas the Deaf have a ‘visual’ reference base for the reading process that is not necessarily complemented by an auditory reference system (Neves 2009). DHH people thus have more difficulty relating written discourse to sound, a connection that seems to aid the conversion of text into meaning (de Linde and Kay 1999). Rather, they tend to recode written text into an intermediate representation (lip-reading, fingerspelling or sign language, for instance) in order to understand written discourse (de Linde and Kay 1999).

Two parameters in particular have proven key in the processing of subtitles, namely vocabulary and syntax (cf. Neuman and Koskinen 1992; Kelly 1996; Koolstra et al. 1997; Neves 2009; Zárate 2010b and 2014; among many others). These parameters are even more decisive in the comprehension of subtitles by DHH audiences, as short-term memory and the heterogeneity of the DHH community might impose limitations on comprehension (Neves 2005 and 2009). Moreover, these two variables do not operate separately (Kelly 1996); rather, ‘the relationship between vocabulary and reading comprehension is dependent on syntactic abilities’ (Kelly 1996 in Zárate 2010a: 167). Although syntactic abilities are needed to comprehend the linguistic code, only vocabulary seems to improve with regular exposure to subtitles, and syntax remains the main challenge in understanding written discourse (Domínguez and Alegría 2010; Domínguez et al. 2014). Domínguez and Alegría (2010) and Domínguez et al. (2014) point out that DHH people, even when they are expert readers, rely on the key word strategy to comprehend written messages; that is, they rely mainly on word semantics because they generally lack the syntactic abilities needed to comprehend written discourse. This traps DHH audiences in a vicious circle: because they do not comprehend complex syntax, they rely on vocabulary to understand written messages and, as a result, their syntax cannot improve. The subtitler therefore faces the challenge of analysing the linguistic code in depth, in terms of syntax and vocabulary, to understand what needs to be adapted in order to be comprehended by DHH audiences.

In addition, the speech delivery rate must also be taken into account, since one of the main formal restrictions in SDH has to do with subtitling speed. A lot has been written about subtitling speed for the DHH (cf. de Linde and Kay 1999; Neves 2005 and 2009; Burnham et al. 2008; Lorenzo 2010b; Romero-Fresco 2010; Tamayo 2016; among many others). Despite the lack of consensus among researchers, and between researchers and deaf people’s associations and guidelines, there is no doubt that a reduction of subtitling speed (compared to subtitling for hearing audiences and to standard speech rates in different languages) is needed to provide legible subtitles for DHH audiences. Considering all this, it seems clear that edited, rather than verbatim, subtitles would increase reading comprehension. Nevertheless, deaf people’s associations and the industry still advocate verbatim subtitles (Neves 2008; Romero-Fresco 2010; Szarkowska 2010) even though they might impose ‘a punishing reading load on the viewer’ (Lambourne 2006).
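Subtitling speed is commonly quantified in characters per second (cps), so the restriction can be made concrete with a minimal sketch like the one below. The helper function and the 12 cps ceiling are illustrative assumptions for a reduced DHH-oriented speed, not values taken from the studies or guidelines cited above.

```python
# Minimal sketch of a reading-speed check for one subtitle cue.
MAX_CPS_DHH = 12  # assumed reduced ceiling for DHH viewers (illustrative only)

def chars_per_second(text: str, in_time: float, out_time: float) -> float:
    """Reading speed of one cue in characters per second.

    Times are in seconds; spaces count as characters, a common
    (though not universal) way of measuring cps.
    """
    duration = out_time - in_time
    if duration <= 0:
        raise ValueError("cue must have a positive duration")
    return len(text) / duration

speed = chars_per_second("I never thought we'd make it this far.",
                         in_time=10.0, out_time=12.5)
print(f"{speed:.1f} cps ->", "OK" if speed <= MAX_CPS_DHH else "needs editing")
```

Run on this example, the check flags the cue (about 15 cps) as a candidate for editing, which is precisely where the tension between edited and verbatim subtitles discussed above arises.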

The never-ending debate between edited and verbatim subtitles in SDH is also linked to another main implication of the linguistic code in this AVT mode, namely intralingual vulnerability. It is widely known that DHH people rely greatly on lip-reading, either to complement their residual hearing or to comprehend what is being said on screen (Neves 2009). Hence, the fact that all three sources (the original acoustic information, the images and the intralingual SDH) coexist in the same audiovisual product makes it inevitable to compare them. Hard of hearing people might compare all three types of information (the acoustic; the visual, i.e. lip movements; and the SDH), while profoundly deaf people with no residual hearing and no hearing devices (such as cochlear implants or hearing aids) might still compare the visual to the SDH. As in interlingual subtitling for hearing audiences, this characteristic might attract criticism from spectators. However, the implications go beyond criticism in SDH: since lip-reading is one of the main strategies DHH people use to comprehend the acoustic linguistic code, differences arising from formal restrictions in subtitling, and from the need to simplify syntax and vocabulary to match the audience’s reading skills, might complicate the understanding of the linguistic code. Thus, special consideration must be given to those film sections in which SDH might be compared to lip movements when dealing with intralingual subtitling of an original (not dubbed) film. In such cases the subtitler might opt for a more verbatim, less edited SDH to complement the visual information.

Nevertheless, the implications of the acoustic channel for SDH go far beyond the linguistic code. The paralinguistic code deals with all the information that can be perceived through the voice but does not consist of words. It is widely accepted that this is one of the codes that needs special attention in SDH, as the way we say things might sometimes offer more information than the words themselves (Perego 2009). This code is sometimes referred to as contextual information (cf. AENOR 2012) and it comprises the acoustic information uttered by characters. Two types of such acoustic information can be distinguished. On the one hand, there are sounds uttered by characters that are not associated with the linguistic code (coughing, crying, laughter, snoring…) and, on the other, there are paralinguistic features of the voice that are associated with the linguistic code (pitch, rhythm, intonation…) and that might affect the way we perceive it. The first type of contextual information is easier to deal with in SDH, as it might be considered a more unbiased type of paralinguistic information. When dealing with sounds uttered by characters, the subtitler must weigh redundancy and relevance to decide whether to explicitate them or not. If a character is seen laughing or coughing on screen, for example, there will be no need to explicitate the sound, as it would be redundant with the visuals and would add no information.

Regarding the second type of contextual information, the paralinguistic features of the voice and their meaning, the subtitler will generally be dealing with biased information. Firstly, the subtitler must reflect on the meaning of such information: a change of pitch might signal mockery, happiness or politeness depending on the context, and a specific intonation might signal sarcasm or irony. Secondly, as with unbiased sounds, the subtitler must consider whether there is a need to explicitate such information.

Returning to the idea of intralingual vulnerability and lip-reading, the lip-reading strategy widely used by DHH audiences might be useful not only for comprehending the linguistic code, but also for reading cues that help decode the paralinguistic features of the voice. This also interacts with the mobility and planning codes in the visual channel, as certain lip movements could express irony, surprise or anger, for instance. Furthermore, facial expressions or body gestures might also offer such information, making explicitation in the SDH redundant and unnecessary. In this sense, some audiovisual products might tend to need more of such explicitation than others. For example, the visual channel in cartoons or animated films can easily be manipulated to achieve more informative facial expressions that are redundant with the information received through the acoustic channel, and such products might therefore need less explicitation of paralinguistic features. At the opposite end, less visually expressive genres, such as puppet shows, might benefit from more explicitation of paralinguistic features, since their characters usually have no facial expression and only limited movement, and their mood or communicative intention might be perceptible only through the paralinguistic features of their voices.

Be that as it may, if the subtitler decides to explicitate such paralinguistic or contextual information (whether unbiased sounds or biased paralinguistic features), in any genre or for any type of audience, s/he should reflect on the best way to provide it, that is, on adequacy. The most common practice for such explicitation nowadays is the addition of informative tags in brackets (usually in upper case lettering, as can be seen in Image 2) that describe the sound or its communicative meaning.
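As a concrete illustration of this convention, the following minimal sketch formats such a tag. The function name and the exact bracket style are assumptions for illustration only, since the precise convention varies across countries and guidelines.

```python
# Illustrative helper: render a paralinguistic or contextual label as an
# upper-case informative tag in brackets, optionally prepended to dialogue.
def tag_cue(label: str, dialogue: str = "") -> str:
    """Return a subtitle line with an informative tag, e.g. '(IRONIC) Sure.'"""
    tag = f"({label.upper()})"
    return f"{tag} {dialogue}" if dialogue else tag

print(tag_cue("ironic", "Of course you did."))  # (IRONIC) Of course you did.
print(tag_cue("laughs"))                        # (LAUGHS)
```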

When biased paralinguistic information is made explicit in SDH, the audience receives not the objective explicitation of the sound (for example, ‘with a higher pitch’), but the communicative meaning the subtitler infers from it (for example, ‘ironic’). Special caution must be exercised not to fall into a patronising attitude that explicitates every mood or communicative intention of the characters. Furthermore, as DHH people tend to rely on visual information to complete communicative meaning, there is an urgent need to explore other ways of making paralinguistic features explicit in SDH. The usual solution, based on textual informative tags, has been shown to alter conventional reading patterns (Arnáiz 2015). New creative solutions have already been supported and tested in preliminary studies and practices by different authors (cf. McClarty 2012 and 2014; Tamayo 2015; Fox 2016; Romero-Fresco, forthcoming). Such solutions might include extratextual resources, such as emoticons (Tamayo 2015), or intratextual resources, such as different font types or the use of blurriness, transparency or font size (McClarty 2012 and 2014; Fox 2016; Romero-Fresco, forthcoming). All these alternative solutions might help explicitate moods, emotions or communicative intentions that are only fully perceivable through access to the paralinguistic code. Although some preliminary studies have explored the usefulness of such features, research is still scarce and there is a need to examine how these different solutions might affect the comprehension and enjoyment of captioned audiovisual products and how different audiences receive them.

In audiovisual products, the special effects code and the musical code are usually perceived effortlessly by hearing people (Neves 2009). The musical code is commonly divided into two types in SDH, namely music and songs. The former usually, but not always, refers to extradiegetic background music that helps create atmosphere in the different scenes, while the latter usually, but again not always, consists of diegetic music whose lyrics are important to the plot of the audiovisual product. Background music is omnipresent for hearing audiences, who have learned to process it effortlessly and without paying attention to it. Needless to say, however, background music is not chosen randomly, even when its lyrics are irrelevant to the story or when it is extradiegetic. In The Wonder Years (Carol Black and Neal Marlens 1988-1993), for example, a theme is associated with Winnie Cooper’s image whenever she appears on screen. In episode 3 of season 2 (entitled ‘Christmas’), Kevin Arnold, the main character, finds himself in a mall trying to find the right perfume for Winnie. When he finally does, the audience knows it because Winnie’s theme can be heard in the background as he smells it. A few seconds later, Winnie’s image appears on screen, but that image is redundant with the acoustic channel, since the audience has learnt to link the music to the character. Music, despite being extradiegetic plot music with no lyrics in this case, can substitute for words or visual information and offer relevant information to the storyline.

As with the rest of the signifying codes, the first step when dealing with music in SDH, whether background music or plot music, is to decide whether it should be made explicit. Once again, it is the subtitler’s task to decide whether the music is redundant with the visual information, as in the case presented above, and whether it is necessary to make it explicit or, on the contrary, preferable to offer some time without subtitles to let DHH audiences rest from reading and process and enjoy the images. If the subtitler decides it should be made explicit, it is then time to think about the best way to convey it in subtitles. When background music is heard, SDH usually contains one of the following: an indication of the mere fact that there is music, or some relevant information about it, such as the author, the title of the song, the type of music (rock and roll, pop…) or the feeling associated with it (romantic, scary…). Many authors have dealt with the function of diegetic and extradiegetic music in audiovisual texts, and it is easily inferred that music can not only lend a certain atmosphere to a scene, but also carry affective and expressive value, shape characters’ identities or link certain instruments to feelings thanks to the cohesion of the music with the images, among other functions. Music is, therefore, vital not only to the aesthetics of a film, but also to its plot and meaning. Moreover, music ‘plays an important role in landmarking significant experiences and spaces in people’s lives’, whether for hearing or for deafened audiences (Neves 2010: 124). Therefore, explicitation of feelings, song titles or types of music might be essential to the impact of a film, as ‘recalling music may mean remembering its lyrics, its tempo or melody, or simply the context in which it was experienced’ and ‘film is frequently the context that carries memorable music’ (Neves 2010: 124). Although further studies are needed to explore the best ways of conveying music in SDH (different font types to express different genres, intermittent subtitles to express rhythm, transparency to express volume…) and the extent of their usefulness, it seems evident that making music, and above all its characteristics, explicit is vital to providing DHH audiences with truly accessible SDH.

Every scene should be analysed individually to decide which of those elements is most relevant in each case. As for plot music or songs, lyrics are usually displayed at the bottom of the screen and a symbol is added (usually # or ♪; although the latter is preferable, the former is still used because it generates fewer technical problems) to indicate that what can be read is not dialogue, but music.
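The convention can be sketched as follows. The boolean flag modelling the fallback to # for technical reasons is an illustrative assumption, and the placement of the symbol (start of line only, here) varies across guidelines.

```python
MUSIC_NOTE = "\u266a"  # ♪

def lyric_cue(line: str, note_supported: bool = True) -> str:
    """Prefix a lyric line with a music symbol so it is not mistaken for
    dialogue; fall back to '#' where the note glyph causes problems."""
    return f"{MUSIC_NOTE if note_supported else '#'} {line}"

print(lyric_cue("What a difference a day makes"))         # ♪ What a difference...
print(lyric_cue("What a difference a day makes", False))  # # What a difference...
```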

Although current practices and guidelines seem coherent, research needs to go one step further on this matter. To date, we have no data on how subtitle form (font type, colour, position, size, transparency, etc.) can help identify features such as the type of music or the feeling associated with it. Nor do we have data on how subtitle display (in blocks or rolling, for example) can help make explicit, for instance, the rhythm of a song, and therefore the sensations linked to that rhythm. Academia needs to keep up with current trends, as the audiovisual industry moves towards ever more visual solutions (with the projection of 3D films, for instance). In this sense, research in audiovisual accessibility should also be at the vanguard, exploring more creative solutions that help convey acoustic information in more visual forms. There is a need to keep exploring the interaction between subtitles and the way they can enhance the visual experience in the consumption of audiovisual products (cf. McClarty 2012 and 2014; Fox 2016; Kruger et al. 2016; Romero-Fresco, forthcoming).

The next code to be analysed within the acoustic channel is the special effects code, which deals with sounds that are not uttered by characters, such as a phone ringing, murmuring in a coffee shop or a dog barking. With this code, too, it is crucial to decide first whether the sound should be made explicit, by reflecting on relevance and redundancy. If the subtitler decides a sound should be made explicit, once again s/he will need to think about the best way to convey it in subtitles. This signifying code is usually made explicit with informative tags, either at the bottom of the screen, centred (in countries such as Poland, the United Kingdom, France or Italy), or in another position (as in Spain, where sound effects, and background music, are usually placed in the top right corner). To make it verbally explicit, the subtitler has many options at hand, such as ‘gerunds (barking), nouns (doorbell), verbs (laughs), nouns and verbs (they babble)’ (Zárate 2010b) and onomatopoeias (ring, ring), although different countries may have norms imposing one of these options (in Spain, for example, the UNE standard [AENOR 2012] recommends the use of nouns).
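By way of illustration, the two placement conventions can be expressed with standard WebVTT cue settings (line, position, align). The choice of WebVTT, as well as the timings and percentages, are assumptions made for this sketch; none of the national guidelines mentioned above prescribe this particular rendering.

```python
from pathlib import Path

# Hedged sketch: the same sound effect placed bottom-centre (the default
# line position, as in the UK or Poland) and near the top right corner
# (as Spanish practice favours). Cue settings follow the W3C WebVTT spec.
effects_demo = """WEBVTT

1
00:00:04.000 --> 00:00:06.000 align:center
(DOORBELL)

2
00:00:12.000 --> 00:00:14.000 line:5% position:90% align:right
(DOORBELL)
"""

Path("effects_demo.vtt").write_text(effects_demo, encoding="utf-8")
```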

Although the use of onomatopoeias is quite underexplored and rarely used in practice (cf. Neves 2009; Zárate 2010a and 2010b), there are further solutions for making special effects explicit in a film that would also benefit from more research, namely the use of drawings or pictures that represent the sound. Whether sounds are conveyed with nouns, gerunds, verbs or onomatopoeias, authors such as Arnáiz (2015) or Neves (2005) question conventional tags (as seen in Image 2) and advocate more visual solutions that are more in line with the way DHH people interact with the world, as they tend to be very visually aware, might relate differently to the world of sound depending on their access to it, and might have very different reading skills. To date, only anecdotal studies have explored such solutions, although some authors already argue they could be useful (Neves 2005 and 2009; Civera and Orero 2010; Lorenzo 2010a; McClarty 2012 and 2014; Tamayo 2015). There is a need to keep exploring how the use of drawings (as shown in Image 1) might help facilitate comprehension and enhance enjoyment, and how they interact with the visual information as a whole.


Image 1. Exploration of the use of drawings to explicitate sound effects in Pocoyó (David Cantolla, Alfonso Rodríguez and Guillermo García Carsí 2004-2010) (in Tamayo 2015).

Last, but not least, this section will analyse the implications of the sound arrangement code, which deals with where sound comes from. The most widely known and practised solution, both in subtitling for hearing audiences and in SDH, is the use of italics to indicate that a voice is off-screen. But, again, there are other underexplored and under-researched solutions that should be taken into account. For example, the position of subtitles on screen might help indicate where a sound comes from (as shown in Image 2), and subtitle transparency might help indicate that a voice is coming from far away and is difficult to hear. Going one step further, if sound effects are conveyed in the form of drawings or pictures, an arrow could be added to indicate the origin of the sound, as suggested by Collins and Taillon (2012) for videogames and shown in Image 3. A minimal sketch of the italics and positioning devices follows the images below.


Image 2. Use of different positions to indicate where the sound comes from
(The Fault in our Stars, Josh Boone 2014).


Image 3. Symbols to explicitate where the sound is coming from (Collins and Taillon 2012: 13).
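The sketch below combines two of the devices just discussed: italics for a voice that is not on the scene (standard practice) and horizontal positioning as an underexplored way of signalling the direction a sound comes from. As before, WebVTT is an assumed format chosen for concreteness, and the timings and percentages are arbitrary.

```python
from pathlib import Path

# Cue 1: <i> marks an off-screen voice, the conventional italics solution.
# Cue 2: a left-shifted cue hints that the sound source is on the left.
arrangement_demo = """WEBVTT

1
00:01:12.000 --> 00:01:14.500
<i>Can you hear me?</i>

2
00:01:15.000 --> 00:01:17.000 position:15% align:left
(FOOTSTEPS APPROACHING FROM THE LEFT)
"""

Path("arrangement_demo.vtt").write_text(arrangement_demo, encoding="utf-8")
```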

3. The visual channel

Traditionally, little attention has been paid to this information channel, probably for two reasons: it usually does not contain linguistic information that needs to be translated, and its manipulation is almost never possible. Nevertheless, the translator’s task goes beyond the dialogues in a film. In fact, translation, adaptation or localization of the visual channel can already be seen in some audiovisual products, mainly animated films. For instance, in the Japanese version of the film Inside Out (Pete Docter and Ronnie Del Carmen 2015) the visual references to broccoli were changed to green pepper: they were meant to evoke disgust in the character and, in Japan, broccoli does not carry that association. Moreover, audiovisual translation, adaptation or localization can also occur when there is no change of language. In the same film, a projection of a hockey match in the original version for the United States was changed to a projection of a football match in the United Kingdom. Both countries share the same language, but not the same culture; in other words, they do not share the way they see and interact with the world. An analogous situation can arise in the case of SDH. Although most DHH people nowadays have an oral language as their mother tongue (due to medical and technical advances in hearing devices, to integration policies and, above all, to the fact that most DHH children are born to hearing parents), DHH audiences might not share with their hearing counterparts the way they see and interact with the world. Making a film accessible through SDH thus has to go far beyond conveying the linguistic code in subtitles, which might or might not be in the audience’s mother tongue; it means adapting everything that can be heard and seen to the way DHH people interact with and understand the world.

Be that as it may, the truth is that the visual channel shapes how we perceive a film. As shown in Table 1, there are six signifying codes in the visual channel relevant to AVT, and the fact that translators usually cannot manipulate the image does not mean the channel is irrelevant to the decision-making process. Its significance depends, among other things, on the audience we are translating for. In audiovisual accessibility, the visual channel obviously has many more implications for audio description than for SDH, as the DHH usually, but not always, have full access to it. As stated by Tamayo and Chaume (2016), the implications of the visual channel in interlingual subtitling for hearing audiences and in SDH are mostly the same, with one main exception: when, in spite of the redundancy of the acoustic and visual channels, the latter cannot be fully understood without the former. In the experimental study conducted by Tamayo (2015), DHH children were exposed to audiovisual content containing Image 4 and, although a kiss could be heard and the image was redundant with the sound, some of the children, when shown the image with no caption for the sound, answered that the character was smelling the bread. Although both information channels were redundant in that case, the audience needed explicitation of the sound to fully understand the message.


Image 4. Reuben kissing a baguette in Lilo & Stitch: The Series (Dean DeBlois and Chris Sanders 2003-2006).

Thus, when dealing with the implications of the six codes in the visual channel, the subtitler must be aware of how redundancy, or its absence, can be understood by the target audience. As with the acoustic channel, the first decision is whether what can be seen in the image has to be conveyed in subtitles. This task, however, is not easy, as the subtitler does not normally share the culture and background of the whole target audience. Although s/he might share the culture and background of the majority of the people who will access the product (hard of hearing or deaf people who identify with the world of hearing audiences), s/he should probably always have the Deaf in mind (a minority that usually identifies with the Deaf community, with its own signed language and its own culture), for whom the original product is less accessible. This is an idiosyncratic feature of SDH: the subtitler produces captions in his/her mother tongue and in the source language of the audiovisual product (as SDH is normally intralingual), but s/he does not share the culture and background of the whole target audience. It might therefore be difficult to fully understand what is and is not relevant for the part of the target audience that is most dependent on captions to understand the product. Moreover, the subtitles s/he produces might not be in the mother tongue of the target audience (in the case of signing deaf people). This idiosyncrasy creates a gap between the subtitler and part of his/her target audience, and between the translation and that audience, which is not normally the case in other types of audiovisual translation or in translation in general. To bridge this gap, the subtitler must get to know, as well as possible, the way his/her target audience communicates and understands the world and its acoustic and visual signs, in order to provide relevant and adequate subtitles that meet the audience’s needs. If the subtitler finally decides to caption the visual, this has to be done in a way deaf people can understand. In this sense, it is not only a matter of whether the visual has to be subtitled, but also of how. Thus, concepts directly linked to specific formal restrictions in SDH, such as subtitling speed, must be taken into account not only when dealing with the linguistic code, but also when dealing with the signifying codes of the visual channel.

In addition, an eye-tracking study conducted by Jensema et al. (2000) with 23 DHH subjects concluded that they spent 84 per cent of the time reading subtitles and only 14 per cent on the image (2 per cent of the time they did not watch the video). These results imply that what can be seen by an audience watching the film in a non-captioned version might not be seen by the DHH, who might be too busy reading subtitles. It is therefore crucial to evaluate the relevance of the information transmitted by the visual channel, and its interaction with the load of linguistic code, in order to assess the need to reduce the subtitled linguistic code in favour of the time dedicated to understanding the visuals. A more recent study by Romero-Fresco (2015) concluded that the faster the subtitling speed, the less time is spent on the images and the more on the subtitles. These results suggest that the speed and load of the linguistic code in subtitle form should be in line with the importance of the visual information. In other words, in scenes in which the visuals are crucial to understanding the audiovisual text, the DHH audience might benefit from fewer and slower subtitles. Hence, although the visual channel might not have as many direct implications as the acoustic channel for the way SDH is conveyed, it is crucial to evaluate its importance in the storyline and the load of information it contains, since these could be key to the decision-making process.

Among the signifying codes of the visual channel, within the planning code special attention must be paid to camera shots in which lip-reading might be available to deaf and hard of hearing people. Extra care should be taken when subtitling close-up and extreme close-up shots, above all in original films with intralingual SDH (but also in dubbed versions, given the possibility of inferring paralinguistic information from lip movements). In original films, deaf and hard of hearing people will probably rely more on lip-reading to complete the information received through the acoustic channel (whether via residual hearing or hearing devices) and through the SDH. In those cases, the subtitler might opt for verbatim, rather than edited, subtitles, which allow redundancy, and thus comprehension, across the oral, visual and written information. The planning code might also be significant when deciding whether to subtitle paralinguistic information. Facial expressions or gestures that might complement or be redundant with paralinguistic features of the voice are undoubtedly more recognizable in medium close-up or close-up shots, while, although present, they might not be easily seen in long shots. In addition, when no colour is assigned to a character for identification, the planning code will be crucial in deciding whether other character identification techniques, such as name tags, dialogue dashes or avatars (cf. Tamayo 2015), should be used. The planning code, then, might be decisive when it interacts with other signifying codes in the acoustic channel.

In SDH, the photographic, iconographic, mobility, graphic and syntactic codes, apart from the implications mentioned throughout this article concerning relevance, redundancy and adequacy, do not have many more implications than in subtitling for hearing audiences. Those implications mainly concern the need to change subtitles to maintain coherence with the image and with the dynamism of the story (see Chaume 2004; Tamayo and Chaume 2016).

But there is another implication, one that runs the other way around, which needs to be addressed when dealing with the concepts of cohesion and coherence of signifying codes and SDH presented above. It is not about how the visuals influence the way SDH must appear, or even about the decision of whether something should be captioned; it is not about how the film’s visual identity influences the captioning, but about how the captioning influences the film’s visual, and overall, identity. Providing SDH means adding a visual resource, an external graphic resource, which might change the film’s identity. To offer just one example, the colours normally used to facilitate character identification in SDH add an extradiegetic sign with implications for the iconographic code, as the chromatic effect of the image is altered in favour of the needs of the DHH. A film’s director, for instance, might have chosen a certain range of colours (black and white, pastel tones) to create an aesthetic effect that is disrupted by the use of colours to identify characters in a scene. Although in most cases this might not have major implications for comprehension, it is most definitely relevant to the artistic and aesthetic effect of a film and might affect the way it is perceived, and therefore enjoyed, by viewers.
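As a sketch of how this colour convention is typically realised, many SubRip-style renderers accept inline colour markup. The palette below is an arbitrary example for illustration (the speaker names are borrowed from the Wonder Years example above), not an assignment prescribed by any guideline.

```python
# Assumed, arbitrary speaker-to-colour mapping for illustration only.
PALETTE = {"KEVIN": "#ffff00", "WINNIE": "#00ffff"}

def coloured_line(speaker: str, text: str) -> str:
    """Wrap a dialogue line in the <font> colour tag that many SubRip
    renderers accept; lines by unmapped speakers pass through unchanged."""
    colour = PALETTE.get(speaker.upper())
    return f'<font color="{colour}">{text}</font>' if colour else text

print(coloured_line("Kevin", "I found it."))
# -> <font color="#ffff00">I found it.</font>
```

It is exactly this kind of extradiegetic colouring, useful as it is for identification, that can clash with a film's chromatic design, which is why creative, film-aware alternatives are worth exploring.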

As with some of the alternatives presented for the acoustic channel, these implications, and further creative solutions, are still underexplored. Authors such as McClarty (2012 and 2014), Kruger et al. (2016), Fox (2016) and Romero-Fresco (forthcoming) are experimenting with creative subtitles that enhance the visual identity of audiovisual products. There is no doubt that subtitles, in any form, add extra information to the screen. Although it has been proven that subtitles can be processed automatically (d’Ydewalle and De Bruycker 2007; Perego et al. 2010), automatic processing does not mean that their presence does not interact with the image or create a different type of cohesion between the signifying codes of the visual channel and between the visual and the acoustic channels.

4. Conclusions

The acoustic channel, with its signifying codes, is without a doubt the channel with the most implications for how SDH can or should be displayed on screen. Here, the subtitler faces several challenges. The first is deciding whether to caption what is being heard. The subtitler must reflect on the redundancy and relevance of the information to decide whether it is necessary or appropriate to provide the DHH with such acoustic information. Secondly, if s/he decides the information should be made explicit for DHH audiences, s/he faces the challenge of achieving adequacy in the way it is displayed. Until recently, SDH relied on purely textual solutions that took into account how DHH people process written information, but not how they might benefit from other solutions. For some signifying codes, such as the paralinguistic code, the standard solution based on textual information usually alters conventional reading patterns (Arnáiz 2015), which suggests that there might be more useful ways of conveying sound in SDH. Recently, under-researched options exploring more visual solutions have begun to be studied. Although results are preliminary given the novelty of such solutions, one thing seems clear: research, and practice, should move towards more creative solutions in line with the way DHH people interact with the world, that is, visually, in order to provide adequate and relevant SDH for them.

The visual channel also has implications for the decision-making process in SDH. Concepts such as relevance, redundancy and adequacy have to be put into practice to decide whether visual information should be subtitled and how best to convey it in SDH. To achieve fully relevant and adequate subtitles, the subtitler needs to know his/her audience’s needs and expectations. The planning code is probably the visual code with the most implications for SDH, as it can interact with the lip-reading strategy used by most DHH people to understand dialogue, with the paralinguistic code, or even with the need to identify characters by visual means. Moreover, it might also be crucial in deciding whether visual and acoustic information are redundant. Be that as it may, the most underexplored, yet very meaningful, implication within the visual channel runs the other way around: adding SDH to an audiovisual product means changing the visual channel, its codes and the way they interact with other codes to achieve cohesion, coherence and meaning. Although research on this matter is still scarce, recent studies are offering interesting results that align with new trends in audiovisual production and that might make us rethink and re-evaluate the concept of user experience in SDH.

References

AENOR (2012) Norma UNE 153010: Subtitulado para personas sordas y personas con discapacidad auditiva. Subtitulado a través del teletexto. Madrid, AENOR.

Arnáiz, Verónica (2015) “Eyetracking Tests in Spain” in The Reception of Subtitles for the Deaf and Hard of Hearing in Europe, Pablo Romero-Fresco (ed), Berlin, Peter Lang: 262-263.

Burnham, Denis, Greg Leigh, William Noble, Caroline Jones, Michael Tyler, Leonid Grebennikov and Alex Varley (2008) “Parameters in television captioning for deaf and hard-of-hearing adults: Effects of caption rate versus text reduction on comprehension”, Journal of Deaf Studies and Deaf Education, 13, no. 3: 391-404.

Chaume, Frederic (2001) “Más allá de la lingüística textual: cohesión y coherencia en los textos audiovisuales y sus implicaciones en traducción” in La traducción para el doblaje y la subtitulación, Miguel Duro (ed.), Madrid, Cátedra: 65-82.

― (2004) Cine y traducción. Madrid, Cátedra.

Civera, Clara and Pilar Orero (2010) “Introducing icons in subtitles for the deaf and hard of hearing: Optimising reception?” in Listening to Subtitles. Subtitles for the Deaf and Hard of Hearing, Anna Matamala and Pilar Orero (eds), Bern, Peter Lang: 59-68.

Collins, Karen and Peter J. Taillon (2012) “Visualized sound effect icons for improved multimedia accessibility: A pilot study”, Entertainment Computing, 3: 11–17.

De Linde, Zoe and Neil Kay (1999) The Semiotics of Subtitling. Manchester, St. Jerome Publishing.

Delabastita, Dirk (1990) “Translation and the mass media” in Translation, History and Culture, Susan Bassnett and André Lefevere (eds), London, Pinter Publishers.

Díaz-Cintas, Jorge and Aline Remael (2007) Audiovisual Translation: Subtitling. Manchester and Kinderhook, St Jerome Publishing.

Domínguez, Ana Belén and Jesús Alegría (2010) “Reading mechanisms in orally educated deaf adults”, Journal of Deaf Studies and Deaf Education, 15, no. 2: 136-148.

Domínguez, Ana Belén, Mª Soledad Carrillo, Mar Pérez Martín and Jesús Alegría (2014) “Analysis of reading strategies in deaf adults as a function of their language and meta-phonological skills”, Research in Developmental Disabilities, 35: 1439-1456.

D’Ydewalle, Géry van Outryve and Wim De Bruycker (2007) “Eye movements of children and adults while reading television subtitles”, European Psychologist, 12, no. 3: 196-205.

Fox, Wendy (2016) “Integrated Titles–An Improved Viewing Experience? A Comparative Eye Tracking Study on Pablo Romero Fresco’s Joining the Dots” in Eyetracking and Applied Linguistics I. Open Access Book Series “Translation and Natural Language Processing”, Silvia Hansen-Schirra, and Sambor Grucza (eds), Berlin, Language Science Press: 5-30.

Gambier, Yves (2003) “Screen transadaptation: perception and reception”, The Translator, 9, no. 2: 171-189.

Jensema, Carl J., Ramalinga Sarma Danturthi and Robert Burch (2000) “Time spent viewing captions on television programs”, American Annals of the Deaf, 145: 464-468.

Kelly, Leonard (1996) “The interaction of syntactic competence and vocabulary during reading by deaf students”, Journal of Deaf Studies and Deaf Education, 1: 75-90.

Koolstra, Cees M., Tom H.A. van der Voort and Leo J. Th. van der Kamp (1997) “Television’s impact on children’s reading comprehension and decoding skills: a 3-year panel study”, Reading Research Quarterly, 32, no. 2: 128-52.

Kruger, Jan-Louis, María T. Soto-Sanfiel, Stephen Doherty and Ronny Ibrahim (2016) “Towards a cognitive audiovisual translatology. Subtitles and embodied cognition” in Reembedding Translation Process Research, Ricardo Muñoz Martín (ed.), Amsterdam and Philadelphia, John Benjamins: 171-194.

Lambourne, Andrew (2006) “Subtitle respeaking: A new skill for a new age”, inTRAlinea, Special Issue: Respeaking,
URL: http://www.intralinea.org/specials/article/Subtitle_respeaking (accessed 15 November 2016).

Lorenzo, Lourdes (2010a) “Subtitling for the deaf and hard of hearing children in Spain: a case study” in Listening to Subtitles. Subtitles for the Deaf and Hard of Hearing, Anna Matamala and Pilar Orero (eds), Bern, Peter Lang: 115-138.

— (2010b) “Criteria for elaborating subtitles for deaf and hard of hearing children in Spain: A guide of good practice” in Listening to Subtitles. Subtitles for the Deaf and Hard of Hearing, Anna Matamala and Pilar Orero (eds), Bern, Peter Lang: 139-147.

Martí Ferriol, José Luis (2010) Cine independiente y traducción. Valencia, Tirant Lo Blanch.

McClarty, Rebecca (2012) “Towards a multidisciplinary approach in creative subtitling”, MonTI, 4: 133-153.

— (2014) “In support of creative subtitling: contemporary context and theoretical framework”, Perspectives: Studies in Translatology, 22, no. 4: 592-606.

Neuman, Susan B. and Patricia Koskinen (1992) “Captioned television as comprehensible input: effects on incidental word learning from context for language minority students”, Reading Research Quarterly, 27, no. 1: 95-106.

Neves, Josélia (2005) Audiovisual Translation: Subtitling for the Deaf and Hard-of-Hearing, PhD diss., University of Surrey Roehampton, UK,
URL: http://roehampton.openrepository.com/roehampton/bitstream/10142/12580/1/neves audiovisual.pdf (accessed 15 November 2016).

― (2008) “10 fallacies about Subtitling for the d/Deaf and the hard of hearing”, JoSTrans, The Journal of Specialised Translation, 10: 128-143,
URL: http://www.jostrans.org/issue10/art_neves.php (accessed 15 November 2016).

― (2009) “Interlingual Subtitling for the Deaf and Hard-of-Hearing” in Audiovisual Translation: Language Transfer on Screen, Jorge Díaz-Cintas and Gunilla Anderman (eds), New York, Palgrave Macmillan: 151-169.

― (2010) “Music to my eyes… Conveying music in subtitling for the deaf and the hard of hearing” in Perspectives on Audiovisual Translation, Krzysztof Kredens (ed.), Bern, Peter Lang: 123-146.

Perego, Elisa (2009) “The Codification of Nonverbal Information in Subtitled Texts” in New Trends in Audiovisual Translation, Jorge Díaz Cintas (ed.), New York, Multilingual Matters: 58-69.

Perego, Elisa, Fabio del Missier, Marco Porta and Mauro Mosconi (2010) “The Cognitive Effectiveness of Subtitle Processing”, Media Psychology, 13: 243-272.

Romero-Fresco, Pablo (2010) “Comprehension and reading patterns of respoken subtitles for the news” in New Insights into Audiovisual Translation and Media Accessibility: Media for All 2, Jorge Díaz-Cintas, Anna Matamala, and Josélia Neves (eds), Amsterdam and New York, Rodopi: 175-194.

— (ed.), (2015) The Reception of Subtitles for the Deaf and Hard of Hearing in Europe. Berlin, Peter Lang.

— (forthcoming) “Accessible Filmmaking–Translation and Accessibility from Production” in The Routledge Handbook of Audiovisual Translation Studies, Luis Pérez-González (ed.), London, Routledge.

Szarkowska, Agnieszka (2010) “Accessibility to the media by DHH audiences in Poland: problems, paradoxes, perspectives” in New Insights into Audiovisual Translation and Media Accessibility: Media for All 2, Jorge Díaz-Cintas, Anna Matamala, and Josélia Neves (eds), Amsterdam and New York, Rodopi: 139-158.

Tamayo, Ana (2015) Estudio descriptivo y experimental de la subtitulación en TV para niños sordos. Una propuesta alternativa, PhD diss., Universitat Jaume I, Spain.

― (2016) “Reading speed in subtitling for DHH children: an analysis in Spanish television”, JoSTrans, The Journal of Specialised Translation, 26: 275-294.
URL: http://www.jostrans.org/issue26/art_tamayo.pdf (accessed 15 November 2016).

Tamayo, Ana and Frederic Chaume (2016) “Los códigos de significación del texto audiovisual: Implicaciones en la traducción para doblaje, la subtitulación y la accesibilidad”, Linguae–Revista de la Sociedad Española de Lenguas Modernas, 3: 49-83.

Zárate, Soledad (2010a) “Bridging the gap between Deaf Studies and AVT for Deaf children” in New Insights into Audiovisual Translation and Media Accessibility: Media for All 2, Jorge Díaz-Cintas, Anna Matamala, and Josélia Neves (eds), Amsterdam and New York, Rodopi: 159-173.

― (2010b) “Subtitling for deaf children” in Perspectives on Audiovisual Translation, Łukasz Bogucki and Krzysztof Kredens (eds), Frankfurt am Main, Peter Lang: 107-122.

― (2014) “Word recognition and content comprehension of subtitles for television by deaf children”, JoSTrans, The Journal of Specialised Translation, 21: 133-152.
URL: http://www.jostrans.org/issue21/art_zarate.pdf (accessed 15 November 2016).

Filmography

The Wonder Years (Carol Black and Neal Marlens 1988-1993)

The Fault in our Stars (Josh Boone 2014)

Inside Out (Pete Docter and Ronnie Del Carmen 2015)

Lilo & Stitch: The Series (Dean DeBlois and Chris Sanders 2003-2006)

Pocoyó (David Cantolla, Alfonso Rodríguez and Guillermo García Carsí 2004-2010)

Notes

This article was written thanks to a three-month research stay at the Escuela de Idiomas, Traducción e Interpretación of Universidad César Vallejo (Lima, Peru).

About the author(s)

Ana Tamayo is currently a full-time lecturer and researcher in the Department of English and German Philology and Translation and Interpreting at the University of the Basque Country (UPV/EHU), where she teaches translation and interpreting from English into Spanish. Her research focuses on audiovisual accessibility, more specifically on the study of subtitling for the D/deaf and the hard-of-hearing (SDH). Her PhD thesis (2015) analyses SDH for children on Spanish television and presents an alternative subtitling adapted to the needs of children with hearing impairment. She is a member of the TRALIMA Consolidated Research Group (UPV/EHU, GIU16/48) and the TRAMA Research Group (code: 200) at Universitat Jaume I, and a member of the projects IDENTITRA (MINECO, Spanish Ministry of Economy, Industry and Competitiveness, FFI2015-68572-P, G15/P75) and ÍTACA (MINECO, FFI2016-76054-P).

