Playing Cinematics:

Traditional AVT Modes in a New Audiovisual Landscape

By Gianna Tarquini (University of Bologna, Italy)

Abstract

As the gaming industry is increasingly challenging Hollywood's primacy on a global scale, the audiovisual transfer of software components becomes a key issue. In this respect, while dubbing, subtitling and other audiovisual translation (hereafter AVT) modalities for cinema and television products have been established over decades of practice and polished by specialised research, studies on video game translation are still in their infancy and are being developed against a background of non-standard industry-driven practices. This contribution aims at describing new screen translation modes that are emerging in the localisation of video games by contrasting them to the AVT framework. The underlying question is not only how video games are translated, dubbed and subtitled but, more specifically, how these practices can be re-contextualised and pinned down in a new digital scenario. In order to answer this question, we will retrace the main development stages of the video game medium and chart its specific features in relation to other media, in particular to cinema. The long-established AVT modes will then be re-defined and compared to a new audiovisual landscape, in the light of specific media and operational considerations.

Keywords: Video Game Localisation, audiovisual translation, dubbing, subtitling

©inTRAlinea & Gianna Tarquini (2014).
"Playing Cinematics: Traditional AVT Modes in a New Audiovisual Landscape"
inTRAlinea Special Issue: Across Screens Across Boundaries
Edited by: Rosa Maria Bollettieri Bosinelli, Elena Di Giovanni & Linda Rossato
This article can be freely reproduced under Creative Commons License.
Stable URL: https://www.intralinea.org/specials/article/2068

1. Introduction

Video games look increasingly cinematographic. The boundaries between cinema movies and video games are becoming more and more blurred. Movies that replicate the layered structure of video games or mirror their aesthetic features are increasingly common and, in return, video games increasingly rely on cinematic narrative expedients. Incorporating 3D graphics, stunning full motion picture sequences and immersive storylines, new generation video games combine a plurality of codes such as image motion, music, sound effects and spoken language — all features that used to be the preserve of motion pictures — while at the same time offering the end user customised, configurative and participative experiences. In parallel, as a popular form of entertainment worldwide, video games are increasingly concerned with cross-cultural transfer and AVT modalities such as dubbing and subtitling. These practices have been honed and standardised in the film and TV industry over decades and are familiar to the general public thanks to the pervasiveness of cinema’s big screens, TV sets and DVD home video devices. However, when we approach the computer game screen, we come across unfamiliar – if not awkward – conventions: subtitles that consist of full verbatim transcriptions of the dialogue, running at an excessively high reading speed, and sometimes displayed in three lines or in changing colours; captions that lag far behind or run ahead of speech during game play; audio/video menus that feature voiceover options which eventually turn out to refer to dubbing. It is fair to assume that video game dubbing and subtitling are something other than the long-established AVT modes we are familiar with.

Instead of looking through the “rear-view mirror” (McLuhan and Fiore, 1967: 73), by adopting the AVT paradigm and trying to make it fit into a new setting – that of a distinct medium and a specialised digital industry – we will try to investigate the matrices of such idiosyncrasies. To this purpose, we will first shed light on the emergence and the fundamental constitutional features of the video game medium, with a focus on its affinities with and differences from other media, and, more specifically, cinema. Interestingly, while film studies have explored cinema extensively, and while video games are the object of inquiry of the emerging discipline of game studies[1], the two have rarely converged in parallel aesthetic examinations. A historical overview of the interplay between these two media will be followed by brief considerations on the digital, interactive and semiotic specificities of the game medium, which foreshadow sui generis translation constraints. Attention will then be shifted to operational processes, and a cursory glance will be cast at the international business model of video game localisation so as to contextualise game dubbing and subtitling as niche activities within a global scenario of digital information flows. Finally, the main game audio localisation modalities will be mapped and contrasted with established AVT standards and processes, and it will become clear that there is a need for both greater autonomy and greater integration.

2. Video games and cinema: a historical interplay

The whole aesthetic life of the world developed itself in these five expressions of Art [Music, Poetry, Architecture, Sculpture and Painting]. Assuredly, a sixth artistic manifestation seems to us now absurd and even unthinkable; for thousands of years, in fact, no people have been capable of conceiving it. But we are witnessing the birth of a sixth art. [...] It will be a superb conciliation of the Rhythms of Space (the Plastic Arts) and the Rhythms of Time (Music and Poetry). (Canudo, [1911] 2002: 19)

The literature on the interplay between cinema and video games is heterogeneous and ranges from media studies to production models, and from sociology to audio engineering (Greenfield, 1991; Bukatman, 1993; Sandin, 1998; Manovich, 1999; Bittanti, 2002, 2008; Sotamaa, 2007; Grimshaw, 2008; Blanchett, 2009, among others). As an in-depth multidisciplinary analysis of the analogies and dissimilarities between the two media would transcend the scope of the present discussion, we will highlight the most significant aspects pertaining in particular to film studies, game studies and the AVT background framework. Since “no medium has its meaning or existence alone, but only in constant interplay with other media” (McLuhan, 1964: 26), the historical emergence of entertainment software will provide a thread to trace the unique constituent features of the medium and to make relevant comparisons with cinematography in its most significant converging and diverging aspects. At the same time, looking at the evolution of games helps to shed light on their complex and changing nature, which is still a matter of debate among game studies scholars. Due to the variety of genres, experiences, forms and technologies that have evolved, there is no consensus about the naming and delimitation of video games as a scientific object of study (Newman, 2005). In this discussion, we will use the terms video games, games tout court and entertainment software as synonyms, and will borrow Salen and Zimmerman’s definition of the medium as “an [electronic] system in which players engage in an artificial conflict, defined by rules, that results in a quantifiable outcome” (2004: 80).

The genesis of the video game medium is a history of dialectic, remediation[2] and osmosis in relation to its audiovisual predecessor(s). As poignantly illustrated by a number of media scholars, video games do not simply spawn from cinema, as an ancillary derivation, but have actually developed in parallel as a different achievement of converging artistic aspirations (Sandin, 1998; Manovich, 1999; Bittanti, 2002). Their common matrix can be traced back to the cultural turmoil of the mid-nineteenth century, and in particular to Wagner’s theorisation of opera as the “total artwork of the future”, which brings together music, poetry, painting and the plastic arts in a unique totalising experience. This was supposed to stem not from a “single genius”, but from a collective “genius of community”, a group of artists led by the performer (1849). Nowadays, the nexus between games and theatre is further emphasised by the labelling of video games as a form of “interactive drama”, in which users enter the stage to perform actions that unfold in a dramatic fashion (Laurel, 1991).

The argument of a common artistic inspiration, which has converged and diverged in a variety of manifestations over history, is further supported by interesting parallels with early cinema. According to Manovich, proto-cinematic devices such as the kinetoscope, the mutoscope and the cinematograph emphasised image motion and manipulation over narration: “cinema was understood, from its birth, as the art of motion, the art which finally succeeded in creating a convincing illusion of dynamic reality” (1999: 175). In fact, early motion picture machines could be controlled manually and were customised for a single viewer (or, as we would say nowadays, “user”), thus foreshadowing interactivity in their silent, black-and-white, rudimentary technology. The mutoscope, in particular, patented a few years after Edison’s kinetoscope, was provided with a crank that enabled the user to manually scroll through a sequence of photographic frames attached to a drum and watch them moving through the lens of a hood. Unlike the kinetoscope, it allowed the user greater control over the moving images, which could be fast-forwarded and replayed, thus “subverting the linear model of fruition that eventually became the paradigmatic form of film literature” (Bittanti, 2002: 14). These trends towards user customisation and direct manipulation were abandoned by the mainstream Hollywood model in the 1920s and were enabled again only later with the advent of video games and DVD technologies. A famous case in point is Heilig’s Sensorama, an immersive multi-sensorial arcade attraction that failed to catch on partly because the technologies available at the time were not able to support it. An aesthetic analogy between primitive movies, unsurprisingly called the “cinema of attractions”, and (early) video games is the primacy of spectacle over storytelling: “the earliest publicly released silent films were often short, sensationalistic "special effects", such as a train driving straight toward the audience. That startling effect was produced by introducing a perceptual modality that had not been experienced before in the theatrical context” (Sandin, 1998: 3).

These artistic drives also materialised in the creation of Spacewar!, the first modern video game, which took shape in the computer culture environment. Developed in 1961 by Steve Russell and other MIT (Massachusetts Institute of Technology) students, it was an attempt to stage the adventures of the Skylark space saga by Edward Elmer Smith. The game featured spaceships battling in a wild science-fiction scenario. It is recounted that the inventors of Spacewar! were sci-fi enthusiasts, but since they were hackers rather than film makers, they used a computer to unleash their imagination (Lowood, 2008). As in the cinema of attractions, spectacle was paramount. In fact, while the Lumière brothers' early movie, L'Arrivée d'un Train en Gare de la Ciotat (1895), showed a train — the symbol of the Modern Age — Steve Russell’s Spacewar! staged a spaceship — the emblem of the Space Age: “in both cases, new visual techniques were used as a display window, a spectacle device for other technologies” (Bittanti, 2002: 23).

Despite this parallel path, the emergence of a new medium highlights their very dissimilar constituent features. Firstly, Spacewar! was not a movie, but a ludic experience. Video games relate as much to cinema as to ludology. The entertainment factor in video games relies on visual illusion as well as on game play, interaction and performance. Hence, as will be further argued in section 2.1, the spectator becomes a performer, a player willing to demonstrate their pragmatic and strategic skills. Secondly, video games took off within the computer culture and have led the digital revolution over the years. Albeit inspired by science-fiction literature, Steve Russell and the other MIT students were skilled computer programmers who wanted to demonstrate possible new uses of computing machines, and Spacewar! was, concretely, a programme intended to show this potential.

The subsequent development of video games and cinema is characterised by an accelerated mutual influence, favoured by the heyday of information technology and (global) mass culture communication. Video games made their official appearance in the cultural galaxy of popular entertainment in the 1970s, at a time when film audiences and the hegemonic model of Hollywood’s establishment had come to a standstill for reasons not yet fully explained, and joined VCR, cable and satellite TV in challenging the primacy of cinema (Bittanti, 2002: 1). The most popular titles included the sports game Pong (1972) by Atari, the space shooting games Space Invaders (1978) by Taito and Atari’s Asteroids (1979), and the icon of 1980s pop culture Pac-Man (1980). These were deployed on arcade platforms and on the first home video game consoles, such as the historic Magnavox Odyssey and Atari 2600 systems, and later the Commodore 64.

It is important to stress the fact that video games as we know them today have come a long way since their beginnings, despite retaining core features. The graphics of early games represented 2D objects through simple lines and dots in a single-screen spatial configuration (Wolf, 2001). By comparison, today’s virtual worlds display breathtaking 3D scenery and a stunning cosmology of fictional characters and adventures thanks to the introduction of improved game engines. These provide a set of tools that allow programmers to manage 2D and 3D graphics, artificial intelligence, physics, scripting and other features that ensure life-like parallel universes. The sonic palette of the first games was limited to monophonic beeps, while modern audio engineering techniques provide advanced speech synthesis and hundreds of simultaneous 2D and 3D sounds, in addition to digital signal processors (Grimshaw and Schott, 2007). As far as translation is concerned, early games contained only a few commands, whereas modern games have progressively incorporated extensive instructions, colourful item and character descriptions as well as dialogue. Dubbing and subtitling are therefore a relatively new feature, deployed only on PCs, arcade platforms and more powerful consoles, like the Sony PlayStation, Nintendo Wii and Microsoft Xbox.

As computer graphics became increasingly realistic and video games won favour with the general public, games soon started to contaminate filmmaking, through new filming techniques or direct spin-offs (Super Mario, Pokémon and the Tomb Raider series, to name a few). The introduction of DVDs has further reduced the distance between the two media, giving the spectator a limited margin of interaction with content, and has affected the traditional model of cinema consumption. As poignantly illustrated by media scholar Bittanti, video games have not only exerted a growing influence on the aesthetics of movies, with particular regard to science fiction and animation, but have also driven the emergence of a new film genre, called the technoludic film (2002). On the other hand, video games have progressively absorbed the audiovisual and narrative language of movies, incorporating sophisticated storylines ─ especially in story-based game genres ─ and full motion picture sequences (called cinematics or cut-scenes). The latter in particular were introduced at the end of the 1990s thanks to new storage devices such as CD-ROMs and improved computer processing power and memory.

As a conclusion to this historical excursus, it is worth noting that video games are not only deeply interwoven with Hollywood’s paradigms, but are also related to Japanese manga and anime culture. As historic Japanese publishers (Taito, Nintendo and Konami, among others) challenge the primacy of European and American corporations, the influence of Japanese culture should not be underestimated, especially in genres like role-playing games (Mangiron and O'Hagan, 2006).

2.1 Interactive storytelling

Despite the historical dialectic of video games and cinema and their blurring boundaries, it should be borne in mind that video games were brought to life within the universe of information technology and have since evolved as a computer-based expression of the modern technological era (Lowood, 2008). They rely on programming algorithms, graphic design, artificial intelligence and functionality no less than on fictional and audiovisual texture. From a material point of view, video games are pieces of software and rely heavily on interaction: “the experience of manipulating elements within a responsive, rule-driven world, is still the raison d'être of games, perhaps the primary phenomenological feature that uniquely identifies the computer game as a medium” (Mateas and Stern, 2006: 643). As already noted, early technologies (like the mutoscope and the Sensorama) expressed the common desire to empower the spectator through interaction, long before the advent of video games and film DVDs: “through various stages over the last hundred years or so, these media have been physically approaching closer and closer to their audience, and gradually engulfing them, enfolding their senses in a digital environment” (Wolf, 2000: 206). There is, however, a remarkable difference between the interactive features of DVDs and those of video games. On a DVD, they give the user loose control over the linear unfolding of the plot or access to the configuration options, including dubbing tracks and subtitles. In a video game, they empower the user to manipulate the narratives and step inside the virtual world in a first-person sensorial experience[3].

Interactivity seems to be at odds with storytelling. On a narrative and textual level, video games have been framed within the context of “ergodic literature”, in which “non trivial effort is required to allow the reader to traverse the text” and the stakes raised by interpretation are bound up with physical intervention (Aarseth, 1997: 1). The concept of “cybertext” does not only focus on computer-based textuality, but places the mechanical features of the medium and the user at the heart of the literary exchange (ibid.). In ergodic literature, semiotic interpretation is accompanied by the physical construction of meaning. Video games not only offer more or less elaborate storylines, but encompass configurative and non-linear spatial experiences, allowing the gamer to explore new areas or perform actions without any narrative progression (Newman, 2005). By contrast, despite montage and flashback techniques, fast-forward and replay options, and the semiotic re-constructions taking place in the mental space of the viewer, movies are mostly conceived as linear stories. This emerges clearly when we look at the foretext: while movie scripts are linear, most video game assets are fragmented and non-linear, except manuals and cinematic dialogue. Textual clusters must in fact be programmed so as to respond to the user's interaction and are then picked up by the game system at run-time, according to the user's input. This adds a further problematic dimension during the translation stage, since linguistic segments are not only transfigured by software engineering, but are also fragmented and non-linear, thus demanding considerable interpretive effort.
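By way of illustration, the following minimal sketch (in Python, with entirely hypothetical identifiers and lines, not drawn from any actual title) shows how fragmented speech lines might be stored as keyed assets and selected by the game system at run-time according to the player's input; the translator typically receives only the isolated cells, without the triggering logic.

```python
# A purely illustrative sketch (hypothetical identifiers and lines):
# in-game speech lines stored as keyed fragments and selected at run-time
# according to the current game state, rather than read in linear script order.

dialogue_bank = {
    "guard_greeting_01": "Halt! Who goes there?",
    "guard_alert_01": "Intruder! Sound the alarm!",
    "guard_idle_01": "Another quiet night...",
}

def pick_line(player_visible: bool, player_hostile: bool) -> str:
    """Return the speech line matching the current game state."""
    if player_visible and player_hostile:
        return dialogue_bank["guard_alert_01"]
    if player_visible:
        return dialogue_bank["guard_greeting_01"]
    return dialogue_bank["guard_idle_01"]

# The translator usually sees the three fragments above as isolated
# spreadsheet cells, with no indication of the order in which they will fire.
print(pick_line(player_visible=True, player_hostile=False))
```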

A core feature in game narration is the cut-scene, defined as “any non-interactive storytelling or scene-setting element of a game” (Hancock, 2002). Accordingly, cut-scenes are usually responsible for the narrative framing of the game storyline: they introduce characters and settings, and also give hints to the player. Their popularity is due to blockbuster titles that have extensively exploited them as a narrative and aesthetic device, such as Metal Gear Solid 2 and Final Fantasy, followed by successful sequels. As full motion picture inserts, cut-scenes do not support user interaction but represent a linear intermission in a third-person perspective that allows cuts and fades, close-ups, long shots and other conventional filming techniques.[4] Cut-scenes are usually provided with options allowing the user to skip them, especially when immersed in the game play sequences. Although they are appealing and animated with the utmost care, detractors tend to disregard them as non-core elements for the purposes of game agency and completion. This argument may provide possible explanations as to the quality level required (by producers) for cut-scene dubs and subtitles, especially in translated versions. By contrast, in-game dialogue occurs within game play and is activated by the game engine on the fly. As such, it is designed to accommodate non-linear textual patterns and segmented speech lines. This, as we will see, entails important aesthetic, technical and translation consequences in relation to cinematic dialogue.

2.2 Audiovisual dimensions and beyond

Undoubtedly, audiovisual language is what binds video games to cinema most. As “technologies of illusion”, both media aspire to “represent, reinvent, and redefine reality for commercial and artistic purposes. They seek to create compelling fictional situations that engross the audience. They fashion visually shareable, but otherworldly alternative spaces” (Bittanti, 2002: 11). The complex architecture of the audiovisual text has been widely investigated in film studies, semiotics and AVT literature (Metz, 1972; Eco, 1975; Delabastita, 1989; Bollettieri Bosinelli, 1994; Barthes, 1997, among others). The audiovisual text is generally understood as a multi-layered semiotic construct that conveys a set of codes via two channels: acoustic, through sound waves, and visual, through light signals (Chaume Varela, 2004). Its complexity relies on the rich combination of visual elements (including linguistic, kinesic, iconic, photographic, montage, motion codes etc.) and acoustic elements (comprising linguistic and paralinguistic codes, music, sound effects, background noise etc.), the final meaning being not the sum of these elements but the conflation of them. As a result, the main difficulty for AVT translators is to transfer a multi-semiotic blend of messages while operating only on the (spoken or written) verbal code. The same fundamental multimedia features are also found in the semiotic texture of (modern) video games, where similar visual and acoustic codes interweave in a rich variety of synaesthetic patterns.

Nevertheless, all visual and aural dimensions are digitised and to a certain extent transfigured in the game cyberspace. Indeed, both cinema (as well as other audiovisual media) and video games are representational forms that aspire to imitate reality, or at least to present credible virtual worlds. The main difference is that each medium uses its own production tools and techniques: (mostly and traditionally) analogical for the former, and digital/computer-based for the latter. As far as video game visuals are concerned, instead of visual codes we could refer to graphic codes, since “full motion picture is actually 3D or 2D animation, photographs become screenshots, static images are a product of graphic design, and written language is displayed in a colourful and dynamic set of pixels” (Tarquini, 2010: 4). For example, gestures, mimicry and other kinesic codes that accompany verbal expression are highly stylised in games, and sometimes user-driven, thus becoming less relevant for non-verbal communication purposes. Furthermore, video games feature non-diegetic elements (instructions, technical content, icons and other pictorial elements) as well as diegetic text (descriptions, narratives) that are displayed in a more or less typical computer screen layout, called the graphical user interface (GUI) (Järvi, 1997).[5] Almost alien to film aesthetics, such linguistic/iconic elements belong to the multimedia architecture of electronic media (including CD-ROMs, websites, business applications and the like), although game content is designed to be appealing and entertaining.

Another fundamental difference from the audiovisual text is that video games incorporate a third channel as an integral part of their multimedia system: the haptic (touch-related) channel (Ensslin, 2010). This transmits infra-red signals, electric impulses and digital information that is hard to frame within the context of human communication. It concerns iconic or written interface elements, such as navigation menus, clickable maps, onscreen buttons and commands that activate complex human-machine interaction cycles of tactile inputs and audiovisual outputs. These are complemented by non-diegetic sounds, called “auditory icons” (Grimshaw and Schott, 2007: 475), that are part of a specific aural sub-code pertaining to interaction signals, like error and selection sounds. In addition to interface elements, interactivity also affects other audiovisual codes and in particular how they are created or received. For example, montage has little prominence during game play, because the user can switch to a first-person camera angle, explore any corner designed by the developers, zoom the view in and out and configure a vast array of visual settings. As far as aural codes are concerned, a dominant concept in game audio engineering is “acoustic ecology” (Grimshaw and Schott, 2007: 476-7). This term presupposes a dynamic web of interactions occurring between the playing character and other characters and their responses to the game engine sounds, in a 3D space that is neither fixed nor static. In fact, game sounds are supposed to account for the dynamic positioning of the playing character in relation to the acoustic source. For instance, if the playing character walks away from the ringing bells of a cathedral, the game engine detects their position and decreases the volume until it fades. This relational sound framework plays a strategic role in games, since approaching voices and sounds work as acoustic cues when the source is not visible.
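To make the notion of acoustic ecology more concrete, the sketch below offers a simplified, hypothetical model (not taken from any actual game engine; the linear falloff and all numeric values are illustrative assumptions) in which the volume of a diegetic sound source is computed from the playing character's distance from it, so that the cathedral bells of the example above fade as the character walks away.

```python
import math

# A simplified, hypothetical model of distance-based attenuation:
# the volume of a diegetic sound source decreases as the playing character
# moves away from it. The linear falloff and all values are illustrative.

def attenuated_volume(listener_pos, source_pos, max_distance=50.0, base_volume=1.0):
    """Scale the source volume down with the listener's distance from it."""
    distance = math.dist(listener_pos, source_pos)
    if distance >= max_distance:
        return 0.0  # out of earshot: the bells have faded completely
    return base_volume * (1.0 - distance / max_distance)

# The character walks away from the bells placed at the origin.
for x in (5.0, 20.0, 45.0, 60.0):
    print(x, round(attenuated_volume((x, 0.0), (0.0, 0.0)), 2))
```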

3. Manufacturing and translating fun

According to recent research, 469 movies have been adapted to one or more video game versions from 1975 to 2008, amounting to 10 per cent of total released titles (Blanchett, 2009). On the other hand, 53 “technoludic” movies have incorporated video games as a theme or as a narrative technique from 1976 to 2001 (Bittanti, 2002). The striking fact is that very few professionals have worked on the same movie and game project, apart from franchisers. Despite common roots and inspiration, the gaming industry has largely developed as an autonomous segment within the entertainment sector. Undoubtedly, video game production requires specific technical skills, including programming, design, engineering and testing. However, game development teams increasingly include specialists such as cinematographic artists, screenplay writers, musicians, sound engineers as well as dubbing directors and actors (source game versions are in fact the first to need the voicing of virtual characters). The general impression is that, while responding to the unique needs of game programming, professionals have developed ad hoc audiovisual practices that are non-standard and partly disconnected from the expertise of the film industry.

By comparison, the video game industry shares more similarities with the utility software industry, often framed within the GILT paradigm (Globalisation, Internationalisation, Localisation and Translation), the global business model and modi operandi of which are the object of an extensive professional and academic literature (O’Hagan, 1996; Esselink, 2000; Schäler, 2003; Pym, 2004 among others, as well as professional journals[6]). Both the software and game sectors are concerned with the development and adaptation of digital content for global markets (called locales) and the management of related linguistic, cultural, technical and legal issues. Another common factor lies in functionality features and in the combination of natural language with software engineering, though, of course, the game medium pursues unique ludic and aesthetic purposes.

Game localisation consequently emerges as a specialised sector providing a full range of services to the global interactive game publishing industry[7]. Among the main differences between traditional AVT and game or software translation in terms of operational processes and trends, we can identify firstly an integrated business model that ensures growing cooperation between the major international stakeholders ─ including developers, publishers, hardware manufacturers and distributors, as well as localisation service providers (also called language vendors, LV) dealing with outsourced localisations. This organisational model involves developing content with foreign users in mind, initiating translation during early development and managing simultaneous localisation projects across international requirements, teams and people. In particular, software internationalisation entails developing localisation-friendly code, and therefore accounting for the support of foreign characters, the design of resizable user interfaces and, in general, the separation of programming elements from linguistic elements, which will be extracted for translation purposes (Schäler, 2003). At the same time, internationalising games calls for cultural sensitivity, since what is funny in one culture may not be equally funny in another culture, or may even be potentially offensive (Edwards, 2011). Furthermore, owing to the complexity of the materials and the tasks involved, the management of game localisation draws on the project management framework, which provides a set of techniques for the planning, management and control of processes, human resources, costs, communications and schedules from project initiation to closure (Mantel et al., 2001; Project Management Institute, 2004).
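As a purely illustrative sketch of the internationalisation principle just described (hypothetical keys, strings and locale codes, not drawn from any actual engine or toolkit), the following fragment keeps linguistic elements in separate, per-locale resources rather than hard-coding them, so that they can be extracted for translation and swapped without touching the programming logic.

```python
# Illustrative sketch of localisation-friendly code: UI strings are kept in
# per-locale resources, separated from the programme logic, so that they can
# be extracted for translation. All keys, strings and locale codes are hypothetical.

UI_STRINGS = {
    "en": {"menu_new_game": "New game", "menu_load_game": "Load game"},
    "it": {"menu_new_game": "Nuova partita", "menu_load_game": "Carica partita"},
}

def ui_string(key: str, locale: str, fallback: str = "en") -> str:
    """Look up a UI string for a locale, falling back to the source language."""
    table = UI_STRINGS.get(locale, UI_STRINGS[fallback])
    return table.get(key, UI_STRINGS[fallback][key])

# The visible text is never hard-coded: changing the locale changes the strings.
print(ui_string("menu_new_game", "it"))  # Nuova partita
print(ui_string("menu_new_game", "de"))  # falls back to English: New game
```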

It should be noted that, unlike film translation processes, game localisation workflows are extremely flexible, due partly to the fluidity of the medium in terms of technical features, software components, file formats, game genres and platforms, and partly to varying development and business plans. For instance, international publishers can opt for partial localisations, involving the translation of the onscreen text only, or for subtitling alone in order to cut costs.[8] In the end, each project is quite unique in its creative features, technical specifications and required quality or adaptation level, sometimes entailing major graphics or storyline re-working.

3.1 Managing audiovisual assets: games in the dubbing studio

In the light of dissimilar media specificities, organisational models and operational workflows, AVT modes for video games emerge as sui generis practices managed by highly specialised professionals within the game localisation industry. Therefore, instead of audiovisual translation in a broad sense, in this context we will refer to video game audiovisual localisation[9] (AVL). The autonomy of this niche sector in relation to the mainstream AVT industry is supported by the evidence that game localisation vendors are often equipped with their own dubbing studios, so as to respond to the unique needs of game audio production within wider projects involving the adaptation of a variety of assets, as highlighted in the previous section. In the Italian context, for instance, major game localisation companies are based in Milan, while traditional dubbing studios are located in Rome. Before comparing the mainstream AVT and AVL modalities, it can be useful to explain how a game's audiovisual assets (cut-scenes and interactive in-game dialogue) are pre-arranged and managed, their respective audiovisual features having been explored from different angles in the previous sections.

Firstly, all content that undergoes the subtitling and/or dubbing process is highly digitised, including visuals, dialogues and the script. In-game and cinematic dialogues are, in fact, organised into an electronic spreadsheet (Figure 1) where each speech line, contained in a separate cell, refers to either subtitles or spoken dialogue, without any distinction made. Basically, the two techniques are blurred in the preliminary translation stage, which is often carried out without audiovisual materials or references. As will be pointed out in the next section in a more detailed description of each modality, AVL excludes intersemiotic transfer, all subtitles being translated from a single list of dialogues that is also used for dubbing (if required). In any case, game translators are not usually concerned with timing, but with loose space constraints regarding the number of characters per line and the total (Excel) cell size of the original speech. Further dialogue synchronisation and adaptation is performed in the dubbing studio by adaptors, dubbing directors and audio engineers with the aid of audiovisual references, and finally verified by testers, who can work with the complete audiovisual version.

Figure 1: Electronic list of dialogues

This screenshot shows a prototypical dialogue database for dubbing, in its bilingual English-Italian version. The electronic script is provided with filter options which allow assets, characters and speech lines to be sorted, using the appropriate drop-down arrows. As illustrated in this picture, cut-scene dialogue is listed in a linear order (FMV 1 stands for full motion video 1), while in-game lines would be displayed in a random order. Although the first lines refer to the monologue of a voice-off narrator (whose name is Caretaker), they are segmented into seven parts. Game dialogue, in fact, is fragmented into single speech lines and audio files, which are identified by a file name written in alphanumeric characters (fourth column on the left). This feature is essential and must be checked carefully, since the engineers who will integrate the audio assets into the game code will simply replace the source file with the target file. For instance, the English audio file “CareFMV111_0001” will be replaced by “CareFMV111_0001_IT” in the Italian localised version. This ad hoc asset configuration allows dubbing actors to be recorded separately, managing flexible shifts.
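The file-naming convention described above lends itself to simple automated checks. The following sketch is a hypothetical helper, not part of any actual localisation toolchain (the .wav extension is assumed purely for the sake of the example): it derives the localised file names by appending a locale suffix, as in the "CareFMV111_0001" to "CareFMV111_0001_IT" replacement just mentioned, and verifies that every source file has a delivered target counterpart before integration.

```python
# Hypothetical helper: derive localised audio file names by appending a locale
# suffix and check that each source file has a delivered target counterpart.
# The .wav extension and the file lists are assumptions made for illustration.

def localised_name(source_name: str, locale: str = "IT") -> str:
    """Append the locale suffix to the file stem, keeping the extension."""
    stem, ext = source_name.rsplit(".", 1)
    return f"{stem}_{locale}.{ext}"

source_files = ["CareFMV111_0001.wav", "CareFMV111_0002.wav"]
delivered_files = {"CareFMV111_0001_IT.wav"}  # one target file is missing

for src in source_files:
    target = localised_name(src)
    status = "ok" if target in delivered_files else "MISSING"
    print(f"{src} -> {target} [{status}]")
```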

In the next section, emerging game AVL practices will be examined in further detail and compared to AVT standard definitions and norms.

3.2 Audiovisual Translation versus Audiovisual Localisation

Besides changing their core features, AVT modalities change their names in AVL, for the terminology does not strictly overlap. In the gaming industry, the term voiceover refers to “any spoken dialogue” in the original or localised version (Chandler and O’Malley Deming, 2011: 207) and is understood more as a translation object than as a translation mode. By comparison, in the AVT industry voiceover is a modality that involves:

reducing the volume of the original soundtrack completely, or to a minimal auditory level, in order to ensure that the translation, which is superimposed on the original soundtrack, can be easily heard. It is common practice to allow a few seconds of the original speech before reducing the volume and superimposing the translation. The reading of the translation finishes a few seconds before the end of the original speech. (Diaz Cintas, 2003: 195).

Drawing on the background AVT framework and on the hints provided by theoretical/descriptive studies on game localisation (Bernal Merino, 2008; Chandler and O’Malley Deming, 2011; Crossignani, 2011; Mangiron, 2012; Minazzi, 2007; Shirley, 2011) as well as on the empirical analysis of a body of materials collected by the author, we suggest below a comparative overview of the definitions given to the main AVT and AVL modes:

DESIGNATION / AVT MODES / AVL

Subtitling
AVT: “the rendering in a different language of verbal messages in filmic media, in the shape of one or more lines of written text presented on the screen in sync with the original message” (Gottlieb, 2001: 87)
AVL: “involves displaying words on the screen that correspond to voiceover dialogue” (Chandler and O’Malley Deming, 2011: 207)

Dubbing
AVT: “involves replacing the original soundtrack containing the actor's dialogue with a TL recording that reproduces the original message, while at the same time ensuring that the TL sounds and the actor's lip movements are more or less synchronised.” (Diaz Cintas, 2003: 195)
AVL: more broadly referred to as voiceover localisation, it involves replacing original dialogue files with TL recordings of script translations, while ensuring time synchronisation and lip-syncing when required

Voiceover
AVT: see the definition quoted above (Diaz Cintas, 2003: 195)
AVL: “any spoken dialogue heard in a game” (Chandler and O’Malley Deming, 2011: 207)

Narration
AVT: “more or less summarised but faithful scripted rendition of the original.” (Perez Gonzalez, 2003: 13)
AVL: Ø

Free commentary (on the spot)
AVT: “adapting the source speech to meet the needs of the target audience, rather than attempting to convey its contents faithfully.” (ibid.)
AVL: Ø

Table 1: film AVT modes vs. game AVL

From this general overview, it is apparent that traditional AVT modes acquire a new meaning, and appear in a new light, in video games, and they need to be re-contextualised beyond standard AVT norms and conventions. Voiceover (in the AVT sense), narration and free commentary are not applied in game localisation. Considering the aesthetic purposes of the medium, it is evident that voiceover in the AVT sense would spoil the involvement of the user, not to mention narration and free commentary, which are mostly used for television programmes. As character synchrony and voice pitch and modulation (Fodor, 1976: 72) are an integral part of the appeal of games, the fun element is maintained, for instance, if the original dialogue is kept with translated subtitles, but it would be compromised if a disembodied voice were used. Furthermore, since the source dialogue track is chopped up into single audio files, often subject to interactive fragmentation, revoicing modes other than dubbing are not allowed by the affordances of the medium.

Game subtitling is essentially a verbatim transcription of spoken dialogue, the same script being used for speech and written text. This clearly means that subtitles tend to run very quickly and do not abide by standard reading times. In particular, interlingual subtitling presupposes that the original list of dialogues has been fully translated and revoiced, if required. In the latter case, subtitles and dialogues are supposed to match up not only in terms of duration, but also in terms of word-for-word transcription. These requirements determine sui generis (and varying) time and space restrictions which flout traditional AVT norms. First of all, game subtitling is not a form of intersemiotic translation. Furthermore, it does not use the substantial condensation strategies of AVT, which are estimated to reduce the source dialogue by 20-50 per cent or more, depending on the language pair (Gottlieb, 1994). As already noted, since translators work blindfolded in a preliminary script rendering, they are supposed to keep the original text length, with respect to single line and total subtitle length, which is not standardised in the source text in the first place. Apart from these loose space restrictions, translators are not subject to standard norms regarding the maximum number of characters per line and per subtitle, line/phrase segmentation as well as exposure time and frame transition (Mangiron, 2012).
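Such loose, source-bound space restrictions can nonetheless be checked mechanically. The sketch below is a minimal illustration, not an industry tool (the dialogue pairs are invented and the 10 per cent tolerance is an illustrative assumption, not a standard): it flags translated lines that exceed the character count of the original cell.

```python
# Minimal sketch: flag translated speech lines that exceed the character count
# of the source cell by more than an agreed tolerance. Data and the 10 per cent
# tolerance are illustrative assumptions, not industry standards.

def over_length(source: str, target: str, tolerance: float = 0.10) -> bool:
    """True if the target exceeds the source length by more than the tolerance."""
    return len(target) > len(source) * (1 + tolerance)

rows = [
    ("Halt! Who goes there?", "Fermo! Chi va là?"),
    ("Sound the alarm!", "Dai subito l'allarme, presto, muoviti!"),
]

for src, tgt in rows:
    flag = "over length" if over_length(src, tgt) else "ok"
    print(f"{src!r} -> {tgt!r} [{flag}]")
```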

Timing is usually managed by localisers in the production and post-production phase and is particularly problematic in the case of in-game dialogue, which is picked up by the game engine at run time. Thus, while frames and cueing times are somewhat blurred, game play accelerations can cause the subtitles to run too quickly or out of sync. By contrast, pre-rendered cinematics retain motion picture linearity and montage. In terms of visual display, game subtitles are codified in pixels and tend to use different colours or hyphens to indicate the speaking character. From a technical point of view, experts recommend that both the game engine and pre-rendered movie animation support the subtitling functionality, since ex-post interventions are extremely time-consuming. However, as most source games include intralingual subtitles in addition to spoken dialogue, interlingual subtitling without voiceover localisation is a valuable cost-saving option for international publishers (Chandler and O’Malley Deming, 2011). Major international titles, such as the Grand Theft Auto series, have been subtitled in the main European languages while keeping the original English audio, and yet have been very successful. Finally, it is worth noting that “verbal messages in filmic media” (Gottlieb, 2001), also referred to as “visual-verbal” elements (Delabastita, 1989), including signs, newspaper headlines, letters and banners, can all be graphically manipulated in video games (if required), especially when they are superimposed on double-layered 2D images. Accordingly, visual-verbal elements are not translated within the subtitles, but extracted from graphic files, translated separately within the artwork localisation assets, and finally placed in their exact onscreen position.

Game dubbing is broadly understood as the same translation modality as film dubbing, although the process is highly digitised and the source dialogue is usually fragmented into single audio files that replace film reels and loops (Figure 1). Indeed, amongst professionals there is no consensus on the terms audio production (Crossignani, 2011; Shirley, 2011), audio localisation, voiceover localisation and dubbing (Minazzi, 2007; Chandler and O’Malley Deming, 2011), as they are partially overlapping concepts. The first term broadly refers to the production of source dialogue (called “voiceover”), which needs to be voiced in the first place, and often to the simultaneous localisation into different target languages. The term dubbing is more often associated with the idea of translated “voiceover” and synchronisation, while the audio/voiceover process as a whole involves translating and adapting audio files that do not necessarily require time and/or lip synchronisation, especially in the case of in-game sequences. According to Jason Shirley (2011), lead audio producer at Microsoft, the typical specifications of an audio localisation project involve 20,000 lines of dialogue, of which 1,000 need to match the source length, 70,000 words, 40 actors, 85 characters, 11 days of recording and 36 days of post-production. Typical technical requirements include 15 different effects processes, 10 cinematic mixes and 48 kHz, mono, 16-bit .wav audio files. This could be multiplied by x target locales in the case of multiple audio localisations, which is an expensive but strategic decision for publishers willing to show their commitment to providing the best quality gaming experience for foreign customers (Chandler, 2011). As video games can offer many hours of enjoyment, without considering replay sessions, game scripts tend to be much longer than films’. Professionals respond to these challenges with flexible shifts and with voice synthesis and modification, enabling the same actor to voice two or more game characters.

Audio localisation projects outsourced to specialised localisation vendors are usually organised into three basic steps: pre-production, production and post-production. The initial pre-production stage requires careful planning and asset pre-arrangement and involves gathering audio/video and technical specifications, planning human resources and schedules for the whole project duration, casting actors, translating and reviewing scripts (usually in a word-for-word rendition) and confirming voice talent as well as studio bookings. Then, the production process involves the actual recording and audiovisual adaptation as specified in pre-production, including lip-sync (if required) and, in some cases, the recreation of visual material for foreign locales. Publishers also make sure that the production process is carried out in-country in order to ensure native speech quality. Finally, the post-production stage entails a variety of technical interventions using advanced audio/video techniques that can be implemented by the localisation vendor, the publisher or both. The main processes involve editing voice recordings; processing audio files through a wide range of techniques (cleaning, levelling and file-naming, manipulating sound effects, mixing and finally mastering); optimising cinematics through post-synchronisation, editing and subtitle integration; converting and compressing the final audio/video formats; and, finally, testing the beta assets and bug fixing. Unlike film visuals, which are difficult to alter, the lip movements and mimicry of virtual characters can be manipulated to match the recorded voices, using cutting-edge technology.[10]

To conclude, digitisation and interactivity permeate a new operational environment with strict technical requirements (audio file formats and names, foreign character support, metadata manipulation etc.) but somewhat looser audiovisual constraints. In fact, one of the mantras of software localisation is that digital (and linguistic) adaptation must not cause technical hindrance on a functional level (Scarpa, 1996; Microsoft Excellence Team, 2007), in that functionality bugs, data corruption and system crashes are the first hurdle to product consumption. At the same time, game audio localisation is supposed to retain the “look and feel” of the original, proposing fresh and colourful dialogue (Mangiron and O'Hagan, 2006), a goal that is ultimately hindered by the current organisational model, especially due to blindfolded translation.

4. Conclusions

This contribution has addressed the issue of how traditional AVT modalities have been reinterpreted in the game audiovisual landscape. A central thread throughout the discussion has been the interplay with other audiovisual products, in terms of constitutional and semiotic features as well as management and adaptation processes. Much space has been devoted to the evolution of the video game medium and to its unique characteristics in relation to cinema, since the main answers to the initial questions can be found in the very nature of the medium, and not in the description of game translation per se. The focus has therefore been placed on causal factors – why game subtitles and dubbing are so unique – rather than on consequential factors – specific constraints and strategies at the level of translation. These require further investigation and descriptive examples, as well as further accounts and insights into the state of the art in the game AVL industry.

Game audiovisual assets have been shown to highlight very specific features that demand the expertise and flexibility of a dedicated industry. Due to the lack of standard practices in the creation and management of source audiovisual materials in the first place, the game AVL industry has developed ad hoc organisational models and modi operandi in order to confront change, complexity and fragmentation. AVT modalities in the game sector are constantly evolving at the pace of technological change and are difficult to freeze into a single organisational/operational model. The main reference framework in this discussion has been the outsourcing model, with a particular focus on European locales, yet further international and organisational perspectives need to be explored.

That said, it is worth concluding with a few words on the quality of game dubbing and subtitling. Throughout this comparative overview of AVT and AVL, no value judgement has been made of the processes described — for instance, claiming that subtitles consisting of full verbatim transcriptions are illegible. Not unlike other business and entertainment sectors, the gaming industry is driven by profit. International publishers have to bet on foreign localisations without any certain return on their investment. They can cautiously opt for partial localisations, excluding revoicing. Or they can prioritise time over quality and “crash” the planned schedule in order to release international versions by Christmas, the peak period for game sales, consequently affecting the final quality level. In addition, since non-interactive sequences are bound to be repeated countless times within a game session and can therefore be skipped, they may wrongly be regarded as a non-crucial element for foreign audience enjoyment. A major operational hindrance is represented by the binding copyright restrictions imposed by intellectual property owners, due to which translators and adaptors cannot always access source audiovisual materials even in the dubbing studio. These considerations, however, along with the difficult technical tasks that game localisation operators have to cope with, do not alter the fact that game subtitling and dubbing could be much improved by drawing on the heritage of film translation. Hopefully, thanks to the growing interest in game localisation, the communication barriers between industrial segments, professionals and users will be removed, facilitating shared efforts and standards from product conception to localisation and final reception.

References

Aarseth, Espen J. (1997) Cybertext: Perspectives on ergodic literature, Baltimore, MD: Johns Hopkins University Press.

Barthes, Roland (1997) Sul cinema, collection of translated essays, Eugenio Toffetti (ed), Genova, Il Nuovo Melangolo.

Bernal Merino, Miguel (2007) “Challenges in the Translation of Video Games”, Tradumática, 5: 1-7.  www.fti.uab.es/tradumatica/revista/num5/articles/02/02art.htm (accessed 15 October 2011)

----  (2008) “What’s in a Game?”, Localisation Focus, 6(1): 29-38.

Bittanti, Matteo (2002) The Technoludic Film: Images of Videogames in Movies (1973-2001), Master’s Thesis, School of Journalism and Mass Communications, San Jose State University. http://www.gamecareerguide.com/education/theses/20020501/bittanti_01.htm (accessed 20 December 2011)

----   (ed) (2008) Schermi interattivi. Il cinema nei videogiochi, Roma, Meltemi.

----  (2008b) “Cut scene: il cinema nei videogame”, Schermi interattivi, Online blog, http://www.scherminterattivi.org/2008/04/cut-scene-il-ci.html#_ftn4 (accessed 13 December 2011)

Blanchett, Alexis (2009) “Movies made into games: some data about adaptation…”, Le Blog Jeuvidéal, 21 June, http://jeuvideal.com/?p=283 (accessed 2 December 2012)

Bollettieri Bosinelli, Rosa Maria (1994) “Film Dubbing: Linguistic and Cultural Issues”, Il Traduttore Nuovo, 42(1): 7-28.

Bolter, Jay David (2001/2009) Writing space: Computers, hypertext, and the remediation of print, Mahwah, NJ, Lawrence Erlbaum, 2nd edition.

Bukatman, Scott (1993) Terminal identity: The virtual subject in postmodern science fiction, Durham, NC, Duke University Press.

Canudo, Ricciotto (1911) “The Birth of the Sixth Art” in The European Cinema Reader, Catherine Fowler (ed) (2002) London, Routledge: 19-24.

Chandler, Heather Maxwell and Stephanie O’Malley Deming (2005/2011) The Game Localization Handbook, 2nd edition, Massachusetts, Jones & Bartlett Learning.

Chaume Varela, Frederic (2004) “Synchronization in Dubbing: A translational approach”, in Topics in Audiovisual Translation, Pilar Orero (ed), Amsterdam & Philadelphia, John Benjamins: 35-52.

Crossignani, Simone. (2011) “Tips for successful games audio production”, Multilingual Computing, September 2011: 40-3.

Delabastita, Dirk (1989) “Translation and Mass Communication: Film and TV Translation as Evidence of Cultural Dynamics”, Babel, 35(4): 193-218.

Diaz Cintas, Jorge (2003) “Audiovisual Translation in the Third Millennium”, in Translation Today: Trends and Perspectives, Gunilla Anderman and Margaret Rogers (eds), Clevendon, Multilingual Matters: 192-205.

Eco, Umberto (1975) Trattato di semiotica generale, Milano, Bompiani.

Edwards, Kate (2011) “Levels of game culturalization”, Multilingual Computing, September 2011: 18-19.

Ensslin, Astrid (2010) “Black and White: Language ideologies in computer game discourse” in Language Ideologies and Media Discourse: Texts, Practices, Policies, Sally Johnson and Tommaso M. Milani (eds), London, Continuum: 205-22.

Esselink, Bert (2000) A Practical Guide to Localization, Amsterdam/Philadelphia, John Benjamins.

Fodor, István (1976) Film Dubbing – Phonetic, Semiotic, Esthetic and Psychological Aspects, Hamburg, Helmut Buske.

Gottlieb, Henrik (1994) Tekstning synchron billedmedieoversættelse, University of Copenhagen, DAO 5.

----  (2001) “Subtitling: visualizing filmic dialogue”, in Traducción subordinada (II). El subtitulado, Lourdes Garcia Lorenzo and Ana Maria Pereira Rodríguez (eds), Vigo, Servicio de la Universidad de Vigo: 85-110.

Greenfield, Patricia Mark (1991) Mind and media: The effects of television, video games, and computers, Cambridge, MA, Harvard University Press.

Grimshaw, Mark (2008) “Per un'analisi comparata del suono nei videogiochi e nel cinema”, in Matteo Bittanti (ed) Schermi interattivi: il cinema nei videogiochi, Roma, Meltemi, 95-122.

Grimshaw, Mark, and Gareth Schott (2008) “A conceptual framework for the design and analysis of first-person shooter audio and its potential use for game engines”, International Journal of Computer Games Technology, 2008: 474-81.

Hancock, Hugh (2002) “Better Game Design Through Cutscenes”, Gamasutra, 2 April. http://www.gamasutra.com/features/20020401/hancock_01.htm (accessed 15 September 2010)

Järvi, Outi (1997) “The Sign Theories of Eugen Wüster and Charles S. Peirce as Tools in Research of Graphical Computer User Interfaces”, Terminology Science & Research, 8(1/2): 63–72.

Laurel, Brenda (1991) Computers as Theatres, Reading, MA: Addison-Wesley Publishing Company.

Lowood, Henry (2008) “La cultura del replay. Performance, spettatorialità, gameplay”, in Schermi interattivi: il cinema nei videogiochi, Matteo Bittanti (ed.) Roma, Meltemi: 69-94.

Mangiron, Carme, and Minako O'Hagan (2006) “Game Localisation: Unleashing Imagination with Restricted Translation”, The Journal Of Specialised Translation, 6: 10-21. www.jostrans.org/issue06/art_ohagan.php (accessed 13 December 2011)

Mangiron, Carme (2012) “Subtitling in game localisation: a descriptive study”, Perspectives: Studies in Translatology,  21(1): 42-56

Mateas, Michael, and Andrew Stern (2006) “Interaction and Narrative”, in The Game Design Reader: A Rules of Play Anthology, Katie Salen and Eric Zimmerman (eds), Massachusetts, MIT Press: 642-69.

Manovich, Lev (1999) “What is digital cinema”, in The digital Dialectic, Peter Lunenfeld (ed), Cambridge, MA: MIT Press, 172-92.

Mantel, Samuel J., Jr., Jack R. Meredith, Scott M. Shafer, and Margaret M. Sutton (2001/2005) Core Concepts: Project Management in Practice, New York, John Wiley & Sons, 2nd edition.

McLuhan, Marshall (1964) Understanding Media, Canada, Mentor.

McLuhan, Marshall, and Quentin Fiore (1967) The Medium is the Massage, New York, Bantam.

Metz, Christian (1972) Essai sur la signification au cinéma, Paris, Klincksieck.

Microsoft Language Excellence Team (2007) Microsoft Style Guide (English-Italian). http://www.microsoft.com/language/en/us/download.mspx (accessed 10 September, 2011).

Minazzi, Fabio (2007) “Tecniche di localizzazione audio”, Corso in internazionalizzazione e localizzazione del software, Webcen. webcen.dsi.unimi.it/wcinfo/index_corsi.php?corso=77501&anno_acc=  (accessed December 2008)

Newman, James (2005) Videogames, London, Routledge.

------ (2008) Playing with Videogames, London and New York, Routledge.

O’Hagan, Minako (1996) The coming industry of teletranslation, Clevedon/Philadelphia/Adelaide, Multilingual Matters.

------- (2004) “Translating into the Digital Age: The expanding horizons of localization”, paper presented at the 9th Annual Localisation Conference, (University of Limerick, 21-22 September). http://www.localisation.ie/resources/presentations/2004/Conference/index.htm (accessed 15 January 2012)

Perez Gonzalez, Luis (2003) “Audiovisual Translation” in The Routledge Encyclopedia of Translation Studies, Mona Baker (ed), London: Routledge,13-20.

Project Management Institute (2004) A Guide to the Project Management Body of Knowledge (PMBOK Guide), Pennsylvania, Project Management Institute Inc.

Pym, Anthony (2004) The Moving Text. Localization, Translation, and Distribution, Amsterdam/Philadelphia, John Benjamins.

Salen, Katie, and Eric Zimmerman (2004) Rules of Play: Game Design Fundamentals, Cambridge, MA, MIT Press.

Sandin, Dan (1998) “Digital illusion: Virtual reality, and cinema”, in Digital Illusion: Entertaining the Future with High Technology, Clark Dodsworth (ed), New York, Addison Wesley: 1-12.

Scarpa, Federica (1999) “Localizing packaged software: linguistic and cultural problems”, in Transiti linguistici e culturali, Vol. II, Gabriele Azzaro and Margherita Ulrych (eds), Proceedings of the XVIII AIA National Conference (Genoa, 30 September - 2 October 1996), Trieste, Edizioni Università di Trieste: 305-20.

Schäler, Reinhard (2003) “Making a Business Case for Localisation”, Translating and the Computer, no. 25. http://www.mt-archive.info/Aslib-2003-Schaler.pdf (accessed 6 June 2009)

Shirley, Jason (2011) “Games Localisation Audio Production”, paper presented at the Localisation Research Centre Summer School in Computer & video game localisation (Limerick, 30 May – 2 June). http://www.localisation.ie/resources/courses/summerschools/2011/JasonShirley_Audio.pdf (accessed 12 October 2011)

Sotamaa, Olli (2007) “Let Me Take You to The Movies: Productive Players, Commodification, and Transformative Play”, Convergence, 13(4): 383-401. http://www.uta.fi/~tlolso/documents/The_Movies_Sotamaa.pdf (accessed 13 January 2012)

Tarquini, Gianna (2010) “New Media, New Challenges for Terminology: the Semiosis of Electronic Entertainment”, Terminology Science & Research, 21: 1-12.

------ (2011) “Dubbing for fun: the case of cinematics”, in Minding the Gap: Studies in Linguistic and Cultural Exchange, Vol. II, Raffaella Baccolini, Delia Chiaro, Chris Rundle and Samuel Whitsitt (eds), Bologna, BUP: 133-44.

Thayer, Alexander, and Beth E. Kolko (2004) “Localization of Digital Games: The Process of Blending for the Global Games Market”, Technical Communication, 51(4): 477-488.

Wagner, Richard (1849) Das Kunstwerk der Zukunft, Leipzig, Otto Wigand, trans. William Ashton Ellis, The total art-work of the future, The Wagner Library. http://users.belgacom.net/wagnerlibrary/prose/wagartfut.htm#d0e983 (accessed 12 January 2012)

Wolf, Mark J. P. (2000) Abstracting Reality: Art, Communication, and Cognition in the Digital Age, New York, University Press of America.

------ (2001) “Space in the Video Game”, in The Medium of the Video Game, Mark J.P. Wolf (ed), Austin, University of Texas Press: 51-76.

Notes

[1] Game Studies deals with the critical study of video games, drawing on a multiplicity of perspectives and methods: anthropology, ethnology, philosophy, psychology, narratology, ludology, media studies, semiotics, gender studies, game design and programming, to cite but a few. The emergence of Game Studies has been characterised by a debate between narratologists and ludologists over the tension between storytelling and interactivity (Newman, 2005), discussed in section 2.1.

[2] The concept of remediation emerges as a key theoretical tool in new media studies. Remediation occurs “when a newer medium takes the place of an older one, borrowing and reorganising the characteristics [...] of the older medium and reforming its cultural space.” It is also a process of cultural competition between technologies, for the new medium refashions the older one making a more or less explicit claim to improve it. (Bolter, 2009: 23).

[3] Specifically, video game interactivity is regarded as a multivalent term that encompasses at least four levels of engagement: cognitive interactivity, “that is the psychological, emotional, and intellectual participation between a person and a system”; functional interactivity, involving the functional/ergonomic interaction with system peripherals and their usability; explicit interactivity, intended in the obvious sense of overt participation in the onscreen action through interface elements; and, finally, beyond-the-object interactivity, referring to “the interaction outside the experience of a designed system. The clearest examples come from fan culture, in which participants co-construct communal realities, using designed systems as the raw material” (Salen and Zimmerman, 2004: 59-60). The last interaction mode, also called “participation within the culture of the object”, is one of the most interesting phenomena to emerge from the video game medium. Video games can in fact be manipulated not only within game play, but also outside it. Transformative practices, such as bypassing or changing the game rule system, re-configuring storylines and even hacking code, are intrinsic to user reception. Since the very first appearance of the medium, fans have been keen on appropriating video game content through more or less licit practices, such as modding, hacking and cracking (Newman, 2008). This phenomenon highlights an interesting parallel with audiovisual culture: while fansubs and fandubs appeared in the 1980s within anime fan communities, the practice of modifying original content is a distinctive feature of video game participatory culture, fostered by the affordances of the medium.

[4] Indeed, there are a variety of cut-scene typologies. Usually, cinematics are pre-rendered through animation techniques and appear visually different from the in-game 3D graphics generated by the game engine. Other typologies include “in-game cut-scenes” rendered by the game engine, and “live-action cut-scenes” that are shot with real life actors and then digitised. Popular examples of this technique are game cinematics directly drawn from Star Wars and Lord of the Rings movie scenes, and Matrix extra movie sequences specifically shot for video game digitization (Bittanti, 2008b).

[5] Indeed, the complex semiotic and digital nature of video games is manifest in the rich variety of textual components for translation, ranging from manuals and online help, through the onscreen text (containing the storyline, character/item descriptions as well as technical instructions and buttons), to graphic files and the dubbing and subtitling script. Although in this paper we are mostly concerned with the latter, it is worth pointing out that video game translation also means dealing with typical localisation issues, including functionality and technical terminology, thus calling for a multifaceted approach (Bernal Merino, 2007).

[6] See for instance Multilingual Computing: http://www.multilingual.com/

[7] While some publishers manage game localisation internally (in-house model), others resort to external companies, called language vendors (LVs), which specialise in the provision of localisation, translation and/or testing services (outsourcing model). These organisational decisions have important consequences at the level of translation, since in the outsourcing model translators can rarely access the source content or communicate with developers.

[8] Experts identify four major localisation levels: no localisation, i.e. the game is shipped into international markets without being localised; packaging and manual localisation, known as “box and docs”; partial localisation, involving the translation of in-game text while spoken dialogue is left untranslated; and full localisation, including dubbing (Chandler and O’Malley Deming, 2011: 8-10). A more extreme localisation level, which may occur during the internationalisation stage, is called blending, “when the storyline (and GUI) aspects of a game must undergo significant revision for other cultures”, entailing re-creation and re-coding (Thayer and Kolko, 2004: 483). Blending tends to be implemented in cross-continental localisation projects.

[9] This terminological choice also highlights the semantic shift between localisation and translation. In the industry, in fact, the term localisation broadly refers to the linguistic, cultural and technical adaptation of software, while translation is restricted to the transfer of language and cultural elements incorporated in language. In audiovisual terms, localisation includes adaptation and post-synchronisation, while translation mainly refers to the preliminary rendering of the script.

[10] For further reference on game synchronisation typologies, see Minazzi, 2007 and Tarquini, 2011.

About the author(s)

Gianna Tarquini is a Postdoctoral Research Fellow at the Department of Interpreting and Translation of the University of Bologna, Forlì campus. She is currently working on the FORLIXT project and cooperating with the CAWEB Master's Degree at the University of Strasbourg. A graduate in translation, she completed her PhD in English for Specific Purposes at the University of Naples Federico II with a research project on video game localisation, developed in part at the University of Limerick (Localisation Research Centre) and at Kansai Gakuin University, Japan. Her research interests include audiovisual translation and multimedia databases, localisation, specialised language and translation, and French Studies. She has worked as a freelancer in the game localisation industry since 2006.


