The Present and Future of Accessibility Services in VR360 Players

By Marta Brescia-Zapata (Universitat Autònoma de Barcelona, Spain)


Technology moves fast, and making it accessible becomes an endless game of cat and mouse. Immersive content has become more popular over the years, and VR360 videos open new research avenues for immersive audiovisual experiences. Much work has been carried out in the last three years to make 360º videos accessible (Agulló and Orero 2017; Fidyka and Matamala 2018; Agulló and Matamala 2019), and the chase continues. Unfortunately, VR360 videos cannot yet be classified as accessible because subtitles and/or audio description are not always available. This paper focuses on analysing to what extent the main accessibility services are integrated into the most popular commercially available VR360 players. It also analyses the development of a fully customisable and accessible player, created following the EU standard EN17161 and a user-centric approach. The first part deals with the relationship between technology and audiovisual translation (AVT) studies, followed by a brief overview of existing eXtended Reality (XR) content. Section 3 provides a list of the commercially available VR360 players and compares them in relation to the main accessibility services: subtitling for the deaf and the hard-of-hearing (SDH), audio description (AD) and sign language (SL). The last section presents a new player developed by the ImAc project, which has accessibility at its heart.

Keywords: accessibility, virtual environment, 360º video, subtitling, audio description, sign language interpreting, personalisation

©inTRAlinea & Marta Brescia-Zapata (2022).
"The Present and Future of Accessibility Services in VR360 Players", inTRAlinea Vol. 24.

This article can be freely reproduced under Creative Commons License.
Stable URL:


Immersive media such as virtual reality (VR) and augmented reality (AR) are technologies with the potential to transform the way we work, communicate and experience the world. VR is capable of transforming and innovating traditional sectors such as manufacturing, construction and healthcare. It can also revolutionise education, culture, travel and entertainment. Immersive environments have enjoyed much popularity in video games and other commercial applications such as simulators for kitchen design or surgical training (Fida et al. 2018; Parham et al. 2019). While the tools to generate these environments, the most popular being Unity, have been available for some time, 360º photographs and videos are growing in number, a growth associated with the availability of cameras that record in 360º. In all cases, the aim is to provide an immersive and engaging audiovisual experience for viewers. The two major types of VR content are 360º videos (web-based) and 3D animations (synthetic or Unity-based). This article deals only with the former.

In this context, 360º video (aka VR360 video) has become a simple, cheap and effective way to provide VR experiences (Montagud et al. 2020b). The potential of VR360 videos has led to the development of a wide variety of players for all platforms and devices (Papachristos et al. 2017), such as computers, smartphones and Head Mounted Displays (HMDs). Traditionally, the media have treated accessibility as an afterthought, despite many voices asking for it to be included at the design stage of any process. Moreover, the lack of standardisation and guidelines in this novel medium has resulted in non-unified solutions that focus only on specific requirements.

This situation has served as the motivation for exploring to what extent accessibility services have been integrated into the most popular commercially available VR360 players. This study is a necessary first step towards identifying the advantages and challenges in this novel field. The second contribution of this paper is the presentation of a fully accessible player, developed under the umbrella of the EU H2020 funded project ImAc[1].

The structure of this paper will now be presented. Section 1 deals with the relationship between new technologies and audiovisual translation (AVT) studies by giving a general overview, with a focus on the media accessibility (MA) field. To do so, this section is divided into two parts: the first outlines the state-of-the-art by summarising the main accessibility guidelines; the second focuses on new technologies as a catalyst, making the interaction between the final users and the main accessibility services (namely AD, SDH and SL) possible. Section 2 offers a general overview of XR content; focusing on VR, AR and 360º video. Section 3 analyses the degree of accessibility in the main commercial VR360 players that are available. After presenting the solutions offered by existing VR360 players, a fully accessible player (the ImAc player) is presented in Section 4. Finally, Section 5 offers a discussion on existing limitations, improvements and solutions provided by the ImAc player, as well as some ideas and avenues for future work.

1. Immersive media and AVT: an overview

The unceasing advances in Information and Communication Technologies (ICT) open the door to new fascinating opportunities within MA studies, despite also posing a series of challenges that this discipline has never before faced. This section deals with how the development of new technologies has affected the field of AVT. Firstly, by revisiting the existing accessibility guidelines and recommendations; and secondly, by analysing the importance of researching the tools and technologies needed to generate accessible content.

1.1. Accessible guidelines and recommendations

It is not yet possible to talk about immersive content “for all” because audio description (AD), subtitling for the deaf and hard-of-hearing (SDH) and sign language (SL) interpreting —among others— are not always available for 2D content, let alone 3D or XR.

Regarding legislation, the United Nations (UN) Convention on the Rights of Persons with Disabilities (CRPD), adopted in 2006 and in force since 2008, has currently been ratified by 181 countries. In Article 30 (section 1 b), the convention states that “State Parties […] shall take all appropriate measures to ensure that persons with disabilities: […] (b) Enjoy access to television programmes, films, theatre and other cultural activities, in accessible formats.” This convention has helped to promote the proliferation of accessibility services in media content. AVT, and more specifically MA (Remael et al. (eds) 2014; Greco 2016), is the field in which research on access to audiovisual content has been carried out in the last few years, generally focusing on access services such as audio description (AD), subtitling for the deaf and hard-of-hearing (SDH) or sign language (SL) interpreting, among others (Matamala and Orero 2010; Arnáiz-Uzquiza 2012; Romero-Fresco 2015; Fidyka and Matamala 2018).

The CRPD (UN 2006), together with the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (UNESCO 2005) created legal repercussions within the scope of the accessibility of cultural goods in the European Union (EU). These two conventions resulted in two Directives and one Act that demand accessibility services not only on websites, services and spaces; but also for content and information offered at all cultural venues and events (Montagud et al. 2020a). The three pieces of legislation are:

  1. The EU Directive on the Accessibility of Websites and Mobile Applications, which had to be transposed into the law of each EU member state by September 2018. It is based on the Web Content Accessibility Guidelines (WCAG) 2.0 and references EN301549 as the standard that enables websites and apps to comply with the law.
  2. The Audiovisual Media Services Directive (AVMSD), approved in 2018, gave member states 21 months to transpose it into national legislation. It addresses key issues, for example rules to shape technological developments that preserve cultural diversity and protect children and consumers whilst safeguarding media pluralism.
  3. The European Accessibility Act, which takes the form of a legally binding Directive for all member states. It aims at making many EU products and services (smartphones, computers, TV programmes, e-books, websites, mobile apps, etc.) more accessible for persons with disabilities.

These three pieces of EU legislation demand accessibility services for information and content and at all cultural venues and events; as opposed to only websites, services and spaces. As is reviewed in the following sections, all audiovisual products need to be accessible, including those providing XR content for their audience.

1.2. The impact of new technologies in AVT and MA studies

Technical advances have modified not only media applications but also accessibility services and profiles. Even end users have been affected by the advent of new ways to access media content. Traditionally, AVT and MA research has focused on the analysis of translated audiovisual texts and their many transformations or transadaptations (Gambier 2003). Some years ago, the translated audiovisual text was static in the sense that one format was distributed and consumed by all. Audiences are now allowed to make decisions about the media they consume, beyond the two traditional values: sound level and image contrast. These days, technology allows for media personalisation (Orero forthcoming; Orero et al. forthcoming; Oncins and Orero 2020). When watching media content on TV or YouTube, subtitles can be read in different sizes, positions and colours (Mas Manchón and Orero 2018). The speed of content reproduction can also be altered, and the audiovisual text is increasingly changing according to individual needs. Media accessibility has shifted from a one-size-fits-all approach to customisation, and technology has allowed for this change. The Internet of Things lets customers decide the way in which each media object is set up, and Artificial Intelligence can now take individual choices, needs and preferences into account.
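The kind of personalisation described above can be pictured as a small settings object that a player stores per user profile and applies when rendering subtitles. The following sketch is purely illustrative; the field names and size mappings are assumptions, not taken from any particular player:

```python
from dataclasses import dataclass

@dataclass
class SubtitlePrefs:
    """Hypothetical per-user subtitle preferences a player might persist."""
    size: str = "medium"      # "small" | "medium" | "large"
    position: str = "bottom"  # "top" | "bottom"
    colour: str = "#FFFFFF"   # text colour
    background: bool = True   # semi-opaque box behind the text
    speed: float = 1.0        # playback-rate multiplier for the whole stream

def style_for(prefs: SubtitlePrefs) -> dict:
    """Translate the preferences into the style values a renderer would use."""
    px = {"small": 18, "medium": 24, "large": 32}[prefs.size]
    return {
        "font-size": f"{px}px",
        "color": prefs.colour,
        "anchor": prefs.position,
        "background": "rgba(0,0,0,0.7)" if prefs.background else "none",
    }

print(style_for(SubtitlePrefs(size="large", position="top")))
```

A real player would load such an object once per profile and reuse it across all content, which is exactly what moves subtitling from one-size-fits-all towards customisation.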

On the one hand, technology is the basis of the tools used for creating or adapting content; on the other, it is the basis for consuming content (Matamala 2017). Regarding content creation and adaptation, there is a wide range of professional and amateur translation software for both subtitling (e.g. Aegisub, Subtitle Workshop, VisualSubSync, WinCaps, Softel Swift Create, Spot, EZTitles, etc.) and AD (e.g. Softel Swift ADePT, Fingertext, MAGpie2, Livedescribe or YouDescribe). It is not easy to find studies on preferences or comparative analyses of these tools. Concerning subtitling, Aulavuori (2008) analyses the effects of subtitling software on the process. Regarding AD, Vela Valido (2007) compares the existing software in Spain and the USA. Oncins et al. (2012) present an overview of the subtitling software used in theatres and opera houses and propose a universal solution for live media access which would include subtitling, AD and audio subtitling, among other features.

The tool used by the users to consume audiovisual media is the media player. The choice of device (mobile phone, TV, PC, smart watch, tablet), type of content and the available accessibility services determine the level of personalisation (Gerber-Morón et al. 2020). Subtitles, for example, are displayed differently according to screen size, type of screen, and media format. The choice of the subtitle is made through the media player settings, making understanding its capabilities a basic departure point when analysing translated immersive media content.

The existence of such a wide range of technical solutions opens the door to many research questions. It remains to be seen how new developments will affect the integration of accessibility services into existing tools and their adaptation to the needs of end users: everyone has the right to access the information provided by media services (including immersive experiences). This paper offers a valuable resource to content providers seeking to improve their products, to users with accessibility needs choosing the player that best suits their requirements, and to researchers setting out to identify the challenges and possibilities that this new field poses for AVT and MA.

2. The rise of VR and AR: VR360 video becomes mature

In this section, a general overview of VR, AR and VR360 videos will be given, highlighting the impact that these new technologies may have on our society at different levels.

Although immersive content production is still at an early stage and is generally used in professional environments such as hospitals and universities, in the future it might be part of the daily lives of ordinary users. As with all new technologies, VR promises to make our lives easier: from going to the supermarket to online shopping (Lee and Chung 2008). Immersive environments are the new entertainment experiences of the 21st century, from museums (Carrozzino and Bergamasco 2010) and theatres to music events such as opera (Gómez Suárez and Charron 2017). They allow users to feel as if they were physically transported to a different location. Though the most popular applications are cultural representations, VR is also a powerful tool for larger audiences and functions such as leisure, sport (Mikami et al. 2018), tourism (Guttentag 2010) and health (Rizzo et al. 2008).

There are various solutions that can provide such an experience, such as stereoscopic 3D technology, which has re-emerged in films during the last ten years (Mendiburu 2009). Nevertheless, this format is nothing new. It has been available since the 1950s, but the technology has not been ready to deliver quality 3D (González-Zúñiga et al. 2013). Both the quality of the immersive experience and the sense of depth depend on the display designs, which for 3D content are diverse and lacking in standards (Holliman et al. 2011). However, stereoscopy did not become the main display for AV products, perhaps due to the lack of standardisation, the intrusive nature of 3D, and uncomfortable side effects such as headaches or eyestrain (Belton 2012). According to Belén Agulló and Anna Matamala (2019), “the failure to adopt 3D imaging as mainstream display for AV products may have opened a door for VR and 360º content, as a new attempt to create engaging immersive experiences”. VR stands for ‘virtual reality’ and it takes on several different forms, 360º video being one of them. However, VR and 360º videos are two different media (see Table 1). In 360º video, multi-camera rigs (often static) are used to record live action in 360º, giving the consumer a contained perspective of a location and its subjects. VR renders a world in which, essentially, the consumer operates as a natural extension of the creator’s environment, moving beyond 360º video by enabling the viewer to explore and/or manipulate a malleable space. In 360º video, the consumer is a passenger in the storyteller’s world; in VR, the consumer takes the wheel. The storyteller directs the viewer’s gaze through this situational content by using elemental cues such as light, sound and stage movement. The traditional notion of the fourth wall has been eliminated.



| VR | 360º VIDEO |
| --- | --- |
| Digital environment. | Live action. |
| Immersive world that you can walk around in. | 360º view from the camera’s perspective, limited to the filmmaker’s camera movements. |
| Video can progress through a series of events; experiences can be held in an existing world to be explored by the user (6 degrees of freedom). | Video progresses on a timeline created by the filmmaker’s camera movements (3 degrees of freedom). |
| A full experience requires an HMD. | Available on 360º-compatible players (desktop and mobile). |
| The filmmaker does not control the physical location of the viewer in the built environment and must capture attention and motivate the user to travel in the direction of the events of the story. | The filmmaker controls the physical location of the camera but must capture the attention of viewers to direct the story. |

Table 1: Differences between VR video and 360º video (based on Sarah Ullman)

Other less commercial immersive technologies are mixed and augmented reality. Paul Milgram and Fumio Kishino (1994: 1321) define these terms as follows:

Mixed Reality (MR) visual displays […] involve the merging of real and virtual worlds somewhere along the ‘virtuality continuum’ which connects completely real environments to completely virtual ones. Probably the best known of these is Augmented Reality (AR), which refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects.

Julie Carmigniani and Borko Furht (2001: 3) define AR as “a real-time direct or indirect view of a physical real-world environment that has been enhanced by adding virtual computer-generated information to it.” The properties of AR are that it “combines real and virtual objects in a real environment; runs interactively, and in real time; and registers (aligns) real and virtual objects with each other” (Azuma et al. 2001: 34).

It is not easy to outline a state of the art for immersive environments, as developments in VR technology are happening at an unprecedented speed. Moreover, Covid-19 has accelerated virtual experiences. While it is too early to evaluate, as this paper was written during lockdown, what is now considered “the new normal” will have strong non-presential and virtualisation elements. Systems and applications in this domain are presented on a daily or weekly basis. AR/VR technology makes use of sensory devices to either virtually modify a user’s environment or completely immerse them in a simulated environment. Specially designed headsets and glasses can be used for visual immersion, while handhelds and wearables offer tactile immersion. Optical devices such as Facebook’s Oculus Rift and Sony’s PlayStation VR have shipped millions of units as consumers look to explore the possibilities offered by virtual environments. Nevertheless, the adoption rate for AR/VR devices is relatively low compared to other consumer electronics, though many of the world’s biggest technology companies see the promise of AR/VR and have begun to allocate significant budgets to its development.

Stable growth of the VR and AR markets is expected both in Europe and around the world, as can be seen in Figure 1. According to a report by Ecorys, the total production value of the European VR & AR industry was expected to increase to between €15 billion and €34 billion by 2020 and to directly or indirectly account for 225,000 to 480,000 jobs. Also, wider supply chain impacts are expected to indirectly increase the production value to between €5.5 billion and €12.5 billion and generate an additional 85,000-180,000 jobs. Due to the strong growth of content-related VR activities, the share of Europe in the global market is expected to increase.

Figure 1. AR and VR global market growth, 2016-2025 (World Economic Forum 2017)

360º video has been around for several years with differing levels of sophistication and polish. However, with advances in camera technology combined with the development of software for handling the images, it is being used in an increasing number of ways.

2018 was considered the year of 360º video. Indeed, the trend of watching 360º videos on web browsers, tablets or mobiles has increased. Even if PCs, tablets and mobiles are currently still the main devices for watching 360º video, the use of VR headsets is starting to grow as well. Three main factors could explain why the 360º video market is considered to be already developed: 360º video capture devices are more sophisticated and affordable; the number of web and mobile players is increasing (YouTube, Facebook, Twitter, and the use of mobile phones as HMDs); and the price of VR headsets is slowly decreasing.

VR360 video is now being used by all kinds of people and organisations for sharing immersive stories and extreme experiences in stunning locations. Estate agents, airlines and the hospitality industry are embracing VR360 video to show off their goods; while broadcasters, educators and social media platforms are also experimenting with the format. Many videographers are now learning about VR360 video creation for various purposes and platforms, for example:

  • Immersive journalism. The New York Times has started publishing The Daily 360, a short 360º news report – often in 4K resolution – with multiple cuts between static shots and a reporter acting as the narrator. Done well, 360º video adds a unique eyewitness feel to storytelling and reporting.
  • 360º time-lapses. Many 360º cameras allow the capture of time-lapses, which are sequences of 360º images recorded at set intervals to record changes that take place slowly over time. Speed up the frames and the effect can be exceptional for something as delicate as the Milky Way emerging at night.
  • Live 360º video cameras can now capture and live-stream spherical imagery and most popular online and social media platforms have recently been updated to support 360º content. Viewers can now tune-in to live 360º broadcasts and download the video afterwards.
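The time-lapse arithmetic mentioned in the list above is simple: the number of captured frames is the recording time divided by the capture interval, and the playback length is that frame count divided by the playback frame rate. A minimal sketch (the 30 fps default is an assumption, not a property of any particular camera):

```python
def timelapse_playback_seconds(record_seconds: float,
                               interval_seconds: float,
                               playback_fps: float = 30.0) -> float:
    """How long a time-lapse plays back for a given capture interval."""
    frames = record_seconds / interval_seconds  # one frame per interval
    return frames / playback_fps

# Shooting the night sky for 4 hours at one frame every 30 s
# yields 480 frames, i.e. 16 s of footage at 30 fps.
print(timelapse_playback_seconds(4 * 3600, 30))  # 16.0
```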

3. VR360 players: web and mobile apps

In this section, a list of the main web VR360 players will be presented. Each player will be analysed based on how, and to what extent, they integrate the main accessibility services.

There is an increasing number of VR360 videos available on the internet, and most browsers support them, meaning VR content can be enjoyed online with no need for a VR device. However, it is still necessary to use a player which supports 360º video. Regular video players such as Windows Media Player do not support it, although they probably will in the future. Nevertheless, some players designed for regular video formats do also support VR videos. Most of them are available on the web, but the majority fail in terms of accessibility. The table in the Appendix shows a comparative analysis of the accessibility services offered by the most popular executable players.

The selection was made following two different approaches and from a descriptive perspective. Firstly, following a top-down approach, specialised media such as magazines and papers were taken as a reference to elaborate the first draft of the list. Secondly, users’ opinions and comments in different online forums were taken into account to complete the final list.

GOM is a South Korean product from the Gretech Corporation and one of the best-known video players, mainly used for playing ‘regular’ videos but also supporting 360º video. It is a multilingual player, as the language of the interface can be selected. It can play 360º videos downloaded to a computer and also directly from YouTube. It is possible to search for and upload subtitles and to adjust several features, such as style and position. This player offers the ability to load two separate subtitle files: one displayed at the top of the screen and the other at the bottom.

Codeplex VR Player is an experimental open-source VR media player for Head-Mounted Display devices like Oculus Rift. Not only does it play VR videos, it also lets users watch 2D and even 3D videos. Its user interface is designed to be intuitive, which makes it very easy to use. It is available in a free version and a professional paid version.

Total Cinema 360 Oculus Player is a VR App for Oculus Rift developed by Total Cinema 360. It is specifically designed to capture fully interactive, live action spaces in high quality 360º video. It comes equipped with four demonstration videos that allow you to pause, zoom and adjust eye distance. You can also upload your own 360º video content for use with Total Cinema 360.

RiftMax was developed in Ireland by the VR software developer Mike Armstrong. It is not only a VR video player: it allows users to interact with other people in scenarios such as parties or film screenings, and it can enhance video with effects that appear to come out of the screen. It does not support subtitles and is only available in English.

SKYBOX was developed in the UK by Source Technology Inc and supports all stereo modes (2D and 3D, or 180º and 360º). You can choose multiple VR theatres when you are watching 2D or regular 3D videos; including Movie Theater, Space Station and Void. It supports all VR platforms: Oculus, Vive, Gear VR and Daydream. SKYBOX supports external text-based subtitles (.srt/.ssa/.ass/.smi/.txt) when you are watching a non-VR video. If a video has multiple audio tracks, you can click on the “Track” tab and select the corresponding audio track.

VR Player is a Canadian product developed by Vimersiv Inc. It is specially designed for playing virtual reality videos and is a popular program among Oculus Rift users. It plays not only VR, but also 2D and 3D videos. It opens media from multiple resources, such as YouTube URLs or cloud-based services like Dropbox. It allows “floating subtitles” for watching foreign immersive videos. So far, this player is only available in the English version.

Magix VR-X is a German product from MAGIX. The player supports Android, iOS and Windows with Oculus Rift, HTC Vive and Microsoft Mixed Reality. There are six languages available: English, Spanish, French, German, Italian and Dutch. Captions are only available in the Premium version (Photostory Premium VR).

Simple VR was developed in Los Angeles. It provides the simplest functions and can serve as a typical media player. Users can play, stop and pause VR video through simple controls. In addition, it has a super-enhancement mode which can improve the fidelity, contrast and detail of VR videos. It also supports “splitters”, which take multi-track video files (such as .mkv) and feed the video decoder with the specific video/audio tracks and subtitles that the user configures.

After this quick analysis, it can be observed that in the players which allow subtitle files to be loaded, these files are rendered as a 2D overlay onto the video window. As there is still no specific subtitle file format for VR360 videos, there is no information about which part of the 360º scene a subtitle relates to. Regarding AD, some of the players support selecting alternative audio tracks, which can be used for playing the AD track; however, there is no mechanism for mixing the AD over the existing audio track in the player. Concerning SL, none of the players provide any mechanism for adding this access service, nor do they offer the possibility to overlay an additional video stream, which could be used for the SL service.
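The missing spatial information could, in principle, be carried as one extra attribute per cue giving the longitude (in degrees) of the speaker within the 360º scene; a player could then compare that angle with the viewer's current yaw and, if the speaker is outside the field of view, display an arrow guiding the viewer towards them. The cue structure and field names below are a purely illustrative sketch, not an existing subtitle format:

```python
from dataclasses import dataclass

@dataclass
class SpatialCue:
    """An ordinary timed subtitle cue plus a hypothetical speaker longitude."""
    start: float      # cue start, in seconds
    end: float        # cue end, in seconds
    text: str
    longitude: float  # degrees; 0º = initial viewing direction, positive = right

def guidance_arrow(cue: SpatialCue, viewer_yaw: float, fov: float = 90.0) -> str:
    """Return '' if the speaker is inside the viewer's field of view,
    otherwise an arrow indicating the shorter way to turn."""
    # Signed angular difference normalised into (-180º, 180º]
    delta = (cue.longitude - viewer_yaw + 180.0) % 360.0 - 180.0
    if abs(delta) <= fov / 2:
        return ""
    return "→" if delta > 0 else "←"

cue = SpatialCue(1.0, 3.5, "Look over here!", longitude=120.0)
print(guidance_arrow(cue, viewer_yaw=0.0))  # speaker is off-screen to the right
```

This is essentially the kind of speaker-guidance behaviour that the ImAc user tests (Section 4) explored when asking home users about their preferred ways of being guided to the speaker.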

4. ImAc project and player

This section presents and analyses the VR360 video player created under the umbrella of the H2020 funded ImAc project, following the Universal Design approach and the “Born Accessible” concept. The design departed from user specifications and took accessibility requirements into account throughout development.

ImAc was a European project funded by the European Commission that aimed to research how access services (subtitling, AD, audio subtitles, SL) could be integrated into immersive media. The project aimed to move away from the constraints of existing technologies into an environment where consumers could fully customise their experience (Agulló 2020). The key action in ImAc was to ensure that immersive experiences address the needs of different kinds of users. One of the main features of the ImAc project was its user-centred methodological approach (Matamala et al. 2018), meaning that the design and development of the system and tools were driven by real user needs, with users continuously involved in every step. The player was developed after gathering user requirements from people with disabilities in three EU countries: Germany, Spain and the UK. User input was gathered in two iterations through focus groups and pre-pilot actions.

The first step in the user-centric methodology was to define the profile of the end users. Two different profiles were created: the professional user and the advanced home user. Professional users were considered to be those who would use the tools at work: IT engineers, graphic designers, subtitlers, audio describers and sign language interpreters (signers). The advanced home users, on the other hand, were people with disabilities who consumed the media content: deaf, hard-of-hearing, blind, low-vision and elderly users. To successfully profile home users, a number of considerations were taken into account beyond disability, such as level of technological knowledge and experience with VR environments. This was decided in order to engage home users in an open conversation regarding their expectations and match them accordingly with the innovation; only users with knowledge or experience of either functional diversity or technology were consulted. Other profiling features of the home users were oral/written languages (Catalan, German, Spanish and English) and three visual-gestural languages (Catalan Sign Language, German Sign Language and Spanish Sign Language). Further significant profiling factors were the level of expertise in the service the participant was testing (audio description, audio subtitling, sign language, subtitling), sensorial functionality (deaf, hard-of-hearing, blind, low vision) and age. As some degree of hearing or vision loss can often be linked to age, the elderly were included in the home users’ category.

Once the two groups of end users (advanced home users and professionals) had been defined, two sets of user requirements were formulated: home requirements and professional requirements. The former described the functions exposed by the ImAc services for consuming media, and the latter described the functions from a working perspective. Three versions or iterations of the requirements were carried out. The first version was based on focus groups in which the user scenarios created by the ImAc partners were evaluated by both professional and home users; a user scenario describes what the already identified user would be experiencing and how (e.g. how the interface deals with AD depending on the angle of visualisation). The second iteration was conducted after the pre-pilot tests, where prototypes of accessible immersive media content were presented to the target group of home users (for the visual access services, for instance, the tests focused on the preferred size of the area to display the services and the preferred ways of guiding users to the speaker). In this way, more extensive feedback on specific issues could be gathered, and the home user requirements were subsequently fine-tuned. The final iteration took place after the demonstration pilots, which involved both professional and home users. The resulting list of final requirements provides the basis for the further development and quality assurance of the ImAc platform.

After compiling all the information regarding the end user profiles and requirements, the user interface (UI) was designed. The aim was a concept flexible enough to extend the settings later, based on the results of the user testing. The main challenge was to integrate four services with a large number of settings while avoiding a long and complex menu. The UI design for accessing accessibility services in the ImAc player was based on existing players (legacy players from catch-up TV services, web players for video-on-demand and streaming services, and VR players); taking these as a starting point, a design for a “traditional UI” was developed. An “enhanced accessibility UI” was developed in parallel; the two were combined, and the resulting ImAc player UI offers aspects of both. Following feedback from the user tests, the implemented UI provides access to the accessibility services in the ImAc portal, and the player reflects their status.

The ImAc player has succeeded, and has been presented here as the main example of an accessible player, because it follows both the “Universal Design” and “Born Accessible” concepts. The first term comes from the European Standard EN 17161 (2019), ‘Design for All - Accessibility following a Design for All approach in products, goods and services - Extending the range of users’, which specifies the requirements in the design, development and provision of products, goods and services that can be accessed, understood and used by the widest range of users, including persons with disabilities. Alongside EN 17161, there is an EU standard for accessible technologies: EN 301549 (version 3.1.1: 2019). These two standards underpin the concepts of Universal Design and Born Accessible. According to Pilar Orero (2020: 4): “the concept of Born Accessible closely follows EU legislation and has been proven to be successfully integrated in R&D activities and developments.”

5. Conclusions

More than ever, technology is enabling the empowerment of end users both as consumers and prosumers. However, technology is still designed with accessibility as an afterthought, far from user-centric design. User interaction in today's Information Society plays a key role in the full social integration and democratic participation of all its citizens. Enabling easy access to content and guiding the user in controlling media services are two requirements found in the most recent UN and European media regulations. At the same time, UIs should themselves be accessible, as the demand for guidance is especially high for accessibility services. Some groups of users need to activate an accessibility service, such as subtitles, before they can consume the media content at all; yet the default setting for a media service is, in general, to have all accessibility services switched off. It is therefore very important that activating and controlling the accessibility services is made as easy as possible. This article has focused on the accessibility of the media players currently available commercially in order to show the way towards full accessibility in VR, at a time when VR content production is only beginning. It also aims to raise awareness of the real possibility of generating accessible VR content from the point of production.

As the analysis of the most widespread commercially available players has shown, none of them provides access to the full set of accessibility services. VR players do not focus on accessibility services at all, and they have not been designed around user needs. To avoid this basic problem, and following the Born Accessible principle, accessibility (and multilingualism) must be considered at the design stage of any process. Most of the players analysed are only available in English, excluding users with other linguistic realities. Moreover, the media players documented here that do support accessibility services do not follow shared conventions for their icons. A universal set of icons representing the accessibility services is desirable; for that reason, ImAc uses the set of icons proposed by the Danish Radio for all users in all countries.

Finally, the consumer landscape is undergoing a global change due to Covid-19. Confinement measures have changed user demands and pushed them further towards online services. The so-called “new normal” will bring a significant increase in remote online activities, and virtual experiences will be essential in the near future. Marketing, simulation, leisure, training and communication will all need to adapt to these new needs. The promotion of the Universal Design and Born Accessible concepts can play a significant role in achieving these goals and supporting the full democratic participation of all people, while protecting their social rights.

Appendix: VR360 players facing accessibility


[Table only partially recoverable. Each player was compared for the main accessibility services: subtitles (deaf and hard-of-hearing) and audio description (blind and visually-impaired).]

GOM Player[2]: No* (select audio track)
Codeplex VR Player[3]: —
Total Cinema 360 Oculus Player[4]: —
RiftMax VR Player[5]: —
SKYBox VR Video Player[6]: Yes (only in non-VR video); No (select audio track)
VR Player[7]: —
(player name not recovered)[8]: No (only in the premium version)
Simple VR[9]: —

The author is a member of TransMedia Catalonia, a research group funded by the Secretaria d’Universitats i Recerca del Departament d’Empresa i Coneixement de la Generalitat de Catalunya under the SGR funding scheme (ref. code 2017SGR113). This article reflects only the author’s views, and the funding institutions hold no responsibility for any use that may be made of the information it contains. This article is part of Marta Brescia’s PhD in Translation and Intercultural Studies at the Department of Translation, Interpreting and East Asian Studies (Departament de Traducció i d’Interpretació i d’Estudis de l’Àsia Oriental) of Universitat Autònoma de Barcelona.


Agulló, Belén (2020) “Technology for subtitling: a 360-degree turn”, Hermēneus, no. 22.

Agulló, Belén, and Anna Matamala (2019) “The challenge of subtitling for the deaf and hard-of-hearing in immersive environments: results from a focus group”, The Journal of Specialised Translation, no. 32, 217–235.

Agulló, Belén, and Pilar Orero (2017) “3D Movie Subtitling: Searching for the best viewing experience”, CoMe - Studi di Comunicazione e Mediazione linguistica e culturale, Vol. 2, 91–101, URL: (accessed 8 August 2020).

Arnáiz Uzquiza, Verónica (2012) “Los parámetros que identifican el subtitulado para sordos: análisis y clasificación” MonTi: Monografías de Traducción e Interpretación, no. 4, 103–32. URL: (accessed 1 August 2020).

Aulavuori, Katja (2008) Effects of the digital subtitling software on the subtitling process: a survey among Finnish professional television subtitlers, MA thesis, University of Helsinki.

Azuma, Ronald, Yohan Baillot, Reinhold Behringer, Steven K. Feiner, Simon Julier, and Blair MacIntyre (2001) “Recent advances in augmented reality”, IEEE Computer Graphics and Applications, 21(6), 34–47.

Belton, John (2012) “Digital 3D cinema: Digital cinema’s missing novelty phase”, Film History: An International Journal, 24(2), 187–195.

Carmigniani, Julie, and Borko Furht (2011) “Augmented reality: An overview” in Handbook of Augmented Reality, Borko Furht (ed), Florida, Springer: 3–46.

Carrozzino, Marcello, and Massimo Bergamasco (2010) “Beyond virtual museums: Experiencing immersive virtual reality in real museums” Journal of cultural heritage, no. 11, 4:452–458.

EN 17161 (2019) “Design for All - Accessibility following a Design for All approach in products, goods and services - Extending the range of users”, URL: (accessed 15 August 2020).

Fida, Benish, Fabrizio Cutolo, Gregorio di Franco, Mauro Ferrari, and Vicenzo Ferrari (2018) “Augmented reality in open surgery”, Updates in Surgery, no. 70, 389–400.

Fidyka, Anita, and Anna Matamala (2018) “Audio description in 360º videos. Results from focus groups in Barcelona and Kraków”, Translation Spaces, Vol. 7, no. 2, 285–303.

Gambier, Yves (2003) “Introduction. Screen transadaptation: perception and reception”, The Translator, no. 9(2), 171–189

Gerber-Morón, Olivia, Olga Soler-Vilageliu, and Judith Castellà (2020) “The effects of screen size on subtitle layout preferences and comprehension across devices”, Hermēneus. Revista de Traducción e Interpretación, Vol. 22, 157–182.

Gómez Suárez, Mónica, and Jean-Philippe Charron (2017) “Exploring concert-goers’ intention to attend virtual music performances”, AIMAC 14 International Conference on Arts and Cultural Management, Beijing.

González-Zúñiga, Diekus, Jordi Carrabina, and Pilar Orero (2013) “Evaluation of Depth Cues in 3D Subtitling”, Online Journal of Art and Design, no.1, 3:16–29.

Greco, Gian Maria (2016) “On Accessibility as a Human Right, with an Application to Media Accessibility” in Researching Audio Description. New Approaches, Anna Matamala and Pilar Orero (eds), UK, Palgrave Macmillan: 11–33.

Guttentag, Daniel (2010) “Virtual reality: Applications and implications for tourism” Tourism Management, Vol. 31 (5), 637–651.

Holliman, Nick, Neil A. Dodgson, Gregg E. Favalora, and Lachlan Pockett (2011) “Three-dimensional displays: A review and applications analysis”, IEEE Transactions on Broadcasting, 57(2), 362–371.

International Organization for Standardization (2015) ISO/IEC TS 20071-20 Information Technology — User interface component accessibility — Part 20: Software for playing media content, ISO.

Lee, Kun, and Namho Chung (2008) “Empirical analysis of consumer reaction to the virtual reality shopping mall”, Computers in Human Behavior, no. 24, 88–104.

Mas Manchón, Lluis and Pilar Orero (2018) “New Subtitling Possibilities: Testing Subtitle Usability in HbbTV”, Translation Spaces, 7(2), 263–284.

Matamala, Anna, and Pilar Orero (eds) (2010) Listening to Subtitles. Subtitles for the Deaf and Hard of Hearing, Peter Lang, Berna.

Matamala, Anna (2017) “Mapping audiovisual translation investigations: research approaches and the role of technology” in Audiovisual translation-Research and use, Mikolaj Deckert (ed), Frankfurt, Peter Lang: 11-28.

Matamala, Anna, Pilar Orero, Sara Rovira-Esteva, Helena Casas-Tost, Fernando Morales Morante, Olga Soler Vilageliu, Belén Agulló, Anita Fidyka, Daniel Segura Giménez, and Irene Tor-Carroggio (2018). “User-centric approaches in Access Services Evaluation: Profiling the End User”, Proceedings of the Eleventh International Conference on Language Resources Evaluation, 1–7.

Mendiburu, Bernard (2009) 3D movie making: Stereoscopic digital cinema from script to screen, Taylor and Francis.

Mikami, Dan, Kosuke Takahashi, Naoki Saijo, Mariko Isogawa, Toshitaka Kimura, and Hideaki Kimata (2018) “Virtual Reality-based Sports Training System and Its Application to Baseball”, NTT Technical Review, no. 16(3).

Milgram, Paul, and Fumio Kishino (1994) “A taxonomy of mixed reality visual displays”, IEICE Transactions on Information Systems, no. 77, 1321–1329.

Montagud, Mario, Pilar Orero, and Anna Matamala (2020a) “Culture 4 all: accessibility-enabled cultural experiences through immersive VR360 content”, Personal and Ubiquitous Computing, URL: (accessed 5 August 2020).

Montagud, Mario, Pilar Orero, and Sergi Fernández (2020b) “Immersive media and accessibility: Hand in hand to the future”, ITU Journal, URL: (accessed 11 February 2021).

Oncins, Estel·la, Oscar Lopes, Pilar Orero, and Javier Serrano (2013) “All together now: a multi-language and multisystem mobile application to make live performing arts accessible”, Journal of Specialised Translation, no.20, 147-164.

Oncins, Estel·la, and Pilar Orero (2020) “No audience left behind, one App fits all: an integrated approach to accessibility services”, JosTrans, no. 34.

Orero, Pilar (2020) Born Accessible: Beyond raising awareness, URL: (accessed 5 August 2020).

Orero, Pilar (forthcoming) “Audio Description Personalisation” in The Routledge Handbook of Audio Description, Christopher Taylor and Elisa Perego (eds), London, Taylor Francis.

Orero, Pilar, Rai Sonali, and Chris Hughes (forthcoming) “The challenges of audio description in new formats” in Innovation in Audio Description Research, Sabine Braun (ed.), London, Routledge.

Papachristos, Nikiforos M., Ioannis Vrellis, and Tassos A. Mikropoulos (2017) “A Comparison between Oculus Rift and a Low-Cost Smartphone VR Headset: Immersive User Experience and Learning”, 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT), 477-481, doi: 10.1109/ICALT.2017.145.

Parham, Groesbeck, Eric G. Bing, Anthony Cuevas, Boris Fisher, Jonathan Skinner, Richard Sullivan, and Mulindi H. Mwanahamuntu (2019) “Creating a low-cost virtual reality surgical simulation to increase surgical oncology capacity and capability”, Ecancermedicalscience, no. 13.

Remael, Aline, Pilar Orero, and Mary Carroll (eds) (2014) Audiovisual Translation and Media Accessibility at the Crossroads: Media for all 3, Amsterdam, Rodopi.

Rizzo, Albert, Maria Schultheis, Kimberly A. Kerns, and Catherine Mateer (2004) “Analysis of assets for virtual reality applications in neuropsychology”, Neuropsychological Rehabilitation, no. 14:1-2, 207–239.

Romero-Fresco, Pablo (2015) The reception of Subtitles for the Deaf and Hard of Hearing in Europe, Berlin, Peter Lang.

UN (2006) Convention on the Rights of People with Disabilities, URL: disabilities.html (accessed 15 August 2020).

UNE-EN 301549 (2019) Accessibility requirements for ICT products and services.

UNESCO (2005) Convention on the Protection and Promotion of the Diversity of Cultural Expressions, URL: (accessed 15 August 2020).

Vela Valido, Jennifer (2007) Software de audiodescripción: análisis comparativo de los programas existentes en España y EE.UU, MA thesis, Universitat Autònoma de Barcelona.


[1] See [retrieved 11/02/2021]

[2] See [retrieved 05/08/2020]

[3] See [retrieved 05/08/2020]

[4] See [retrieved 05/08/2020]

[6] See [retrieved 05/08/2020]

[7] See [retrieved 05/08/2020]

[8] See [retrieved 05/08/2020]

[9] See [retrieved 05/08/2020]

About the author(s)

Marta Brescia-Zapata is a PhD candidate in the Department of Translation, Interpreting and East Asian Studies at the Universitat Autònoma de Barcelona. She holds a BA in Translation and Interpreting from Universidad de Granada and an MA in Audiovisual Translation from UAB. She is a member of the TransMedia Catalonia research group (2017SGR113), where she collaborates in the H2020 project TRACTION (Opera co-creation for a social transformation). She is currently working on subtitling for the deaf and hard of hearing in immersive media, thanks to a PhD scholarship granted by the Catalan government. She collaborates regularly as subtitler and audiodescriber at the Festival INCLÚS.

