A post-conference survey distributed among participants showed that the overall quality, organization and accessibility of the conference were rated good to excellent. Survey participants unanimously answered "yes" when asked if they would encourage others to attend next time. Despite the conference being held online through Zoom, survey participants commented positively on the networking opportunities. One of the conference's main aims was to connect a wide variety of stakeholders to exchange ideas. In this respect, the participation of artists in the conference was appreciated by survey participants, and the OPEN team hopes such dialogue across profiles and disciplines can be strengthened in future events.
Limecraft: Lessons learned from successful deployments of Artificial Intelligence (AI) in subtitling, what works and what doesn't work, by CEO and founder Maarten Verwaest.
with Pablo Romero Fresco, researcher at Universidade de Vigo and Honorary Professor of Translation and Filmmaking at the University of Roehampton; Kate Fox, Access Manager at Manchester International Festival; Padraig Naughton, Executive Director at Arts & Disability Ireland; and Liza Sylvestre, multimedia artist and curator of academic programs at Krannert Art Museum in Illinois.
The Audio Description Company, a division of The Subtitling Company.
Earcatch, a free app that offers audio descriptions for movies and TV series.
Ooona offers a range of ground-breaking cloud-based translation software.
Limecraft, workflows for video production.
Hearing coach Regina Bijl
Flemish expertise centre Inter.
Create and edit captions and subtitles in any language online, using a simple and intuitive web interface. State-of-the-art tools allow frame-accurate text timing with an advanced timeline, a video grid for precise caption positioning on the screen, and audio waveform and scene-change detection for accurate subtitle spotting. Our tools support import and export in almost any caption and subtitle format, including TTML/DFXP, VTT, SCC, CAP, EBU-STL, SRT, IMSC1, PAC, 890 and many more. We also support generating image-based subtitles for DVD/Blu-ray/DCP authoring. Powerful Pro apps allow you to run automated QA scripts to check and fix your files, as well as customize hotkeys and project settings.
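To give a sense of what a subtitle interchange format looks like, the SRT format mentioned above is simple enough to illustrate: each cue is a sequence number, a start/end timestamp pair, and the text. The following is a minimal Python sketch of SRT serialization (purely illustrative, not Ooona's own code):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues) -> str:
    """Serialize (start, end, text) cues into an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

cues = [(0.5, 2.75, "Welcome to the OPEN Forum."),
        (3.0, 5.2, "Subtitles make events accessible.")]
print(to_srt(cues))
```

Formats such as EBU-STL or PAC carry far more information (positioning, styling, frame rates), which is why professional tools convert between them rather than leaving that to hand editing.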
Earcatch, a free app, offers users a large number of audio-described films and TV series. Audio description is the service that makes film and TV, as well as other forms of culture and information, accessible to people who are blind or visually impaired. Audio descriptions describe mostly visual elements of audiovisual productions, such as the characters, their facial expressions, and the locations and times where and when the story unfolds, and they are rendered aurally by a voice actor. Earcatch allows audio descriptions to be downloaded and played through the app; they are synchronized automatically with the soundtrack in the cinema or on your TV or computer, depending on where you are. The audio descriptions on Earcatch are free and easily accessible for users in both Belgium and the Netherlands. In 2022 Earcatch will launch a new application for the theatre! Its innovative technology will make it possible to download pre-recorded audio descriptions of theatrical performances. The new application will be launched in the Netherlands for three major musicals. Earcatch was developed by Soundfocus in 2015 and has been managed by Stichting Audiovisuele Toegankelijkheid since 2017. Mereijn van der Heijden of Soundfocus and Ellen Schut of Stichting Audiovisuele Toegankelijkheid will present the new Earcatch application during the OPEN Forum of 2022. They will also present a few other projects, including Eurovision with audio description.
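Earcatch does not publish the details of its synchronization technology, but one common way to align a downloaded track with whatever soundtrack the phone's microphone picks up is cross-correlation: slide one signal against the other and find the time offset where they match best. A toy sketch of that idea (an assumption about the general technique, not Earcatch's implementation):

```python
import numpy as np

def estimate_offset(reference: np.ndarray, live: np.ndarray, sample_rate: int) -> float:
    """Estimate how many seconds the live capture lags the reference track
    by locating the peak of their cross-correlation."""
    corr = np.correlate(live, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / sample_rate

# Toy demo: the "live" signal is the reference delayed by exactly 1 second.
sr = 100                                              # samples per second
t = np.arange(0, 5, 1 / sr)
reference = np.sin(2 * np.pi * 3 * t) * np.exp(-t)    # decaying tone
live = np.concatenate([np.zeros(sr), reference])      # 1 s of silence first
print(estimate_offset(reference, live, sr))           # ~1.0
```

Real-world systems typically correlate compact audio fingerprints rather than raw samples, so that matching stays fast and robust against cinema noise.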
Automatic subtitling and localization with Limecraft: https://www.limecraft.com/
In 1757 Dr Le Cat won the essay competition of the Académie des Sciences in Toulouse with the essay The Theory of Hearing. When this medical essay was published, the book received an impressive frontispiece showing the goddess Minerva surrounded by numerous putti with hearing aids, while the doctor observes the scene and takes notes. Last autumn the cover story in Hoordetail (Nov. '21), a magazine for hearing care professionals in the Netherlands and Belgium, showed two smiling women talking about a high-tech invisible hearing aid with AI that can remind you of an appointment or translate what a Swedish waiter is asking you. Between the medical perspective on hearing loss and the (future) technological possibilities, we will put forward the idea of Hearing Happiness (Dr Virdi). Do you think that society has changed a lot, or is going to change, in terms of understanding and accepting hearing loss? Sofie and Regina, both having a bilateral profound hearing loss, both proud users of bilateral hearing aids, and two working women among the 10% of citizens with hearing loss, will illustrate the term Hearing Happiness by explaining that they love to see you, they love to hear you, and they love to bring their gold (talents) to the table. In this presentation they will furthermore invite you on their journey to Hearing Happiness. In addition, they will introduce you to the goddess Sophia as a metaphor for enabling Hearing Happiness by exploring 1) what we need, 2) why we need it, and 3) how it will empower us, you and me, and our society. Being hard of hearing implies an almost daily coming out in public. Sharing these experiences is vulnerable, but Sofie and Regina believe and hope that it will give you valuable and useful insights into the impact of hearing loss and that it will increase understanding and connection.
They will share some good practices, such as events in Arboretum, ZVA, HoorCafé and the Red Star Line museum, that show how Hearing Happiness can be experienced and how accessibility for people with hearing loss can be achieved. By sharing their journey to Hearing Happiness, by introducing you to Sophia and by showing good practices, Sofie and Regina advocate for a hard-of-hearing-friendly environment that does not necessarily make them hear better, but definitely makes them feel better and more connected. They hope to motivate you to take the time to answer the question: what would Sophia do? In this way, you will contribute to this HOH-friendly environment.
While the Nuit Blanche experience has brought very positive outcomes, it has also given rise to many questions. Among them, we still wonder how to succeed in including this public in cultural events when our current societal realities are still far from inclusive. A striking example was the "Bar à Signes" activity within our Vaux-Hall Summer event in the summer of 2021. This activity offered both workshops to learn sign language and themed evenings adapted to the deaf and hard of hearing. Despite the great success of these activities, we were told that many hearing people felt excluded. This quite unexpected turn of events led us to reflect on the actions to put in place so that these communities, which have until now very often acted separately in society, can cohabit and interact during art festivals and cultural events.
My research is about bringing under-heard voices into new music. Sometimes this includes the voices of those from marginalized groups, as in my ‘Express Yourself Sensory Opera’ project for children with PMLD (Profound and Multiple Learning Disabilities); at other times it has meant developing new ways to communicate something through music where previous avenues have proven insufficient. At its core, what is heard is that which has been enabled to be heard through a relationship facilitated by music. Music enables the relationship, relationships are essential for safety and wellbeing, and feeling safe enables the voice to be heard in the music. Drawing on Christopher Small’s relationships in musicking and the ecological understanding of music in Gary Ansdell’s ‘How Music Helps’, I will present the stories of some of the most important relationships that have emerged through musicking over the course of my PhD and share how this has shaped my artistic practice and research.
One of the things we sometimes fail to consider when talking about speech or diction is the importance of gesticulation when expressing an idea. We have heard many times, both within and outside academic circles, that most communication is non-verbal. In fact, we might dare to say that the effect of speech is somewhat dependent on physical gesture. For example, in Carl Theodor Dreyer's film “The Passion of Joan of Arc,” the main character's stoicism is vividly portrayed by the actress's facial expressions. It is worth mentioning that this is a silent movie, and yet we are always aware of the narration. The same happens in music, but this relationship, and the physical and psychological effects between gesture and diction, are often ignored by composers. Even a general understanding of these relationships can lead to the composition of musical works capable of giving meaning and communicating the composers' intentions at a deep level. Understanding the importance of gesture in artwork can help us bridge the gap between composer and performer and audience members who are deaf or blind. Therefore, in this conference we will address issues related to musical composition using not only speech but also gesticulation as an integral part of diction, and show how physical gesture is an important tool when it comes to creating inclusive art, without sacrificing the intentions of composers.
Video or audio files, e.g. of interviews, are still often transcribed manually, and live events such as press conferences or lectures are rarely streamed with subtitles. Yet AI technologies already offer very good solutions in the area of speech-to-text. Automatic transcription solutions link acoustic sounds to words in a digital language model, similar to a digital dictionary. If a sound has several possible matches, for example due to unclear pronunciation, the automatic transcription software examines the overall context, assigns a probability to each possible word and selects the word it considers the most likely match. This analysis is driven by deep learning algorithms. Intelligent software converts the audio track of a video into text in a few moments and also provides a subtitle file that can be embedded in a video. Aiconix offers various transcription solutions. The speech-to-text solution for everyone is an app that can be used in Slack. You drag and drop an audio or video file into Slack Messenger and automatically receive the transcription and subtitle file. This easy-to-use app is ideal, for example, for students who need to transcribe 100 interviews, for a television production that needs subtitles, or for journalists who record interviews with their smartphone and need to process them very quickly. Transcription is currently possible in 9 languages, with the additional function of automatically translating the transcript into other languages. Another solution from aiconix is live transcription. Most automatic transcription solutions are designed for post-production. However, such post-production solutions are not suitable for live events such as online conferences or streams of sporting events. How does it work? Imagine a speaker on stage giving a keynote speech. The microphone into which they speak is connected to a laptop running cloud-based automatic transcription software.
Everything the speaker says is sent to the cloud as an audio file, and there the AI technology matches the different sounds with words in a digital language model. The software then sends the text back immediately, and this can be displayed in the stream so that everyone can read along. The data that the software uploads and downloads is very small, so the whole process happens very quickly. Live transcription is also possible with a slight delay, so that an editor can check the subtitles in the live editor and make any necessary corrections immediately before the stream goes online. With transcriptions one makes one's own content accessible to a larger audience, because some people cannot listen to videos due to a hearing barrier, or they do not want to turn on the sound and prefer to read along. In addition, these transcripts can be processed further, e.g. for a press release, a blog post or for SEO. Aiconix clients include traditional media companies, public institutions, a German state parliament as well as newsrooms, medium-sized companies, and universities.
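The word-selection step described above, where the software weighs how well a word matches the sound against how well it fits the context, can be sketched in a few lines. This toy example uses made-up scores and a hypothetical three-word candidate list, not aiconix's actual model:

```python
import math

# An ambiguous sound after the words "for the": each candidate word gets
# an acoustic score (how well the sound matches) and a language-model
# score (how likely the word is in this context). The decoder picks the
# word with the best combined score.
acoustic_scores = {"whether": 0.40, "weather": 0.38, "wetter": 0.22}
context_scores  = {"whether": 0.05, "weather": 0.80, "wetter": 0.15}

def best_word(acoustic: dict, context: dict, lm_weight: float = 1.0) -> str:
    """Combine log acoustic and log language-model scores; return the argmax."""
    combined = {w: math.log(acoustic[w]) + lm_weight * math.log(context[w])
                for w in acoustic}
    return max(combined, key=combined.get)

print(best_word(acoustic_scores, context_scores))  # prints "weather"
```

Note how "whether" wins on the acoustic score alone, but the context score tips the decision to "weather", which is exactly why context-aware models outperform a plain digital dictionary.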
Ava is an application that captions speech to text on mobile devices, computers and laptops, designed for group conversations. It is made to give deaf and hard-of-hearing people not only access to communication but also the autonomy to take part in a conversation as equals. Ava also provides real-time captioning of any online audio and video on laptops and computers. For some deaf and hard-of-hearing people, plain spoken words transcribed into text are not enough to fully understand a conversation. That is why Ava also developed a product combining AI with a human being who corrects the automated transcript. In this way we use the technology to improve the inclusion and autonomy of our users.
Our accessibility system provides individual accessibility solutions for the performing arts via the use of mobile devices such as smartphones, tablets and smart glasses. Thanks to our professional surtitle software [SPECTITULAR], it’s now possible to select adapted inter- and intralingual surtitles, as well as pre-recorded video files for sign language and audio files for audio description on individual devices. We are pioneering this project and our solution has already been used and tested in situ. We’re happy to present the progress we’ve made and to encourage you to join us as partners on this adventure!
PictureLive converts visual information into a tactile and audible form. In this way, an image, drawing, painting, graphic, photo, etc. becomes accessible and understandable for those who have difficulty seeing it or cannot see it at all. The latest printing technology makes the image optimally tactile, and a link to a description in an audio clip on your smartphone gives it added value for everyone. In this way PictureLive creates a unique experience that also gives people with a visual impairment the opportunity to actively participate in our image-driven society.
Media and cultural accessibility is a vibrant and dynamic domain that is driven by the innovative approaches and enthusiasm of its many stakeholders. Access provisions have grown exponentially in recent years partly due to technological advances, legal support and increased awareness but also due to shifts in perceptions regarding what accessibility is or should be, especially with respect to accessible (cultural) content. Greater diversity in the domains in which accessibility services are provided and greater diversity in its target user groups have led to new approaches, new concepts and qualitative as much as quantitative challenges. Moreover, in some domains, the gap between accessibility providers and accessibility users seems to be narrowing. A key feature of such new approaches is the development of creative and artistically inspired access solutions that sometimes go as far as questioning traditional definitions of accessibility. They throw a new light on a range of issues such as user involvement, empowerment, artistic collaboration and the aesthetic power of access services.
These issues will be discussed by a panel of experienced access experts, including: Pablo Romero Fresco, researcher at Universidade de Vigo and Honorary Professor of Translation and Filmmaking at the University of Roehampton; Kate Fox, Access Manager at Manchester International Festival; and Padraig Naughton, Executive Director at Arts & Disability Ireland.
Do not hesitate to contact us if you have any questions!