The Case For Captions

Talk by Abha Thakor

Captioning online video content should be an integral and indispensable part of the making process.

Abha Thakor explores the reasons for this, in a talk which covers the importance of inclusion, accessibility and a wider business imperative. She looks at the availability of today’s technology, and how it has moved from the clunky subtitling of the past to the smart use of closed captioning. She also considers the impact of AI on the process.

Participants will be given details of how to get involved in the WordPress community’s work on subtitling. You will leave this talk with the ability to improve your global messaging reach through easy-to-understand steps.

View Abha’s Presentation

Transcript

Abha Thakor: Well, thank you for joining me, and I’m going to see if I can minimize that window – hopefully that won’t disturb anybody who’s trying to read the text as well. So, it’s lovely to be here for WordPress Accessibility Day. We’ve got two cameras running, and we have a number of people in our team and in my work organization who are going to be lip reading today. They’re also doing a live translation, because it’s Translation Week for WordPress as well, so you’ll have to be very understanding that I will be looking at two cameras at one point, just so that we can help those people who were struggling a little bit yesterday when they were trying to follow along in the practice. Okay.

So I’m Abha Thakor. I’m from NonStopNews and NonStopBusinessSupport, and I’m here to show you why you should love captions and how you could become a subtitling champion. We’ve prepared a link for those who, as I’ve said, are lip reading, and I’ll try not to move too far away from the text and the jargon that they’ve already input into the AI system, so that they can have an easier time of translating us today.

So, we’re going to look at “to caption or not to caption” – but in my view there is really no question. We need to be doing this. I’m not going to be focusing on the how, but more on the why. So why are we talking about subtitling today? Well, simply, you do not have a finished video product unless you have subtitles. And hopefully through this presentation, you’ll see why that is so important.

Streaming video has increasingly accounted for a large proportion of all internet usage.

The most recent survey by Statista – a company which researches key aspects of business data worldwide – found that video represented up to 95% of all internet usage in some of the countries they sampled. Research since continues to show that in most countries online streaming of video has had a big growth trajectory. During the pandemic, not surprisingly, it has soared.

The effects of the COVID situation have resulted in a further explosion of demand for online streaming access from businesses, from social organizations, special interest groups, and private individuals across the world. So online video consumption really cannot be ignored by businesses or other organizations. It needs to be part of your marketing strategy, your engagement, your knowledge management, and ultimately your sales.

In an increasingly crowded marketplace, and with more sophisticated production software now available, it makes no sense to omit that final part of what should be in all of our checklists – and I hope after today you will have this in your checklist: subtitling and captions.

First, let’s have a look at what these things really are. They are, of course, life-changing for some of our colleagues, vital learning tools, and increasingly essential for search engines.

Basically, if you don’t have that subtitle, this is what is going to happen:

People won’t be able to access it in a variety of mediums, and search engines certainly can’t. We’ll be talking more about that through this presentation. Now, we often come across the two terms “subtitles” and “captions” as if they mean the same thing. But there are actually three terms that are widely used in media production.

Subtitles, open captions, and closed captions, as you can see on the screen. That’s the technical definition. They do not actually mean the same thing. At the moment there are three main types.

We’ll come to images and the new type of captioning that is coming in later on.

Fundamentally, they all mean a visual on-screen text representation of the words heard in an audio track, a video, or your broadcast. You’ll often hear these terms used interchangeably, but you do need to know which of the terms you actually need for your product and your organization, depending on the environment, the ecosystem, and the regulatory framework that you’re operating in.

So captions come in two varieties – open or closed. Both go further than simple subtitling.

They not only represent the spoken words, but they include text descriptions of all non-verbal sounds which may form part of that presentation. So captions amplify the dialogue, the narrative, with descriptions of sounds that would not be captured by subtitling the spoken language alone.

So, for example: woman coughs loudly; sound of breaking glass; dog barks.

(I always wait for my dog to actually take that as a command.); doorbell rings; music plays. This is to try and get as close as possible to the atmosphere and feel of the audio track. It’s much more interesting for the person who can’t access the words or the sound to have a better idea of what is being represented.
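To make that concrete, here is a minimal sketch of what such captions look like in a caption file. It assumes the WebVTT format (one common closed-caption format; SRT and others work similarly), written out from Python; the file name, timings, and cue text are illustrative examples, not from the talk.

```python
# A minimal sketch: writing closed captions, including the non-verbal sound
# descriptions mentioned above, as a WebVTT file. Timings, file name, and
# cue text are illustrative.
from pathlib import Path

captions = """WEBVTT

00:00:01.000 --> 00:00:04.000
[doorbell rings]

00:00:04.200 --> 00:00:07.500
Sorry, one moment. [dog barks]

00:00:07.700 --> 00:00:11.000
As I was saying, captions describe the sounds, not just the speech.
"""

Path("talk-captions.vtt").write_text(captions, encoding="utf-8")
```

A player that supports closed captions can then offer this file as a track the viewer can toggle on or off, which is exactly the open/closed distinction described next.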

Providing captions can have a greater impact on production time and costs than subtitling, but both continue to become easier to create and synchronize through the development of enabling software and artificial intelligence. As a firm, we do a lot of work in this area, and it’s so wonderful to see it becoming more accessible. I’ve mentioned two forms of captions.

Open captions are captions embedded in the video at production stage, and are on screen permanently. The viewer cannot choose whether they appear or not.

I’ve kept this slide up, because when I’ve done this presentation before, people have said they felt it was easier for them to understand the difference when we have this up. So, closed captions are interactive, in that they can be turned on or off at the choice of the viewer. A quick toggle of the remote control and, hey presto – captions on, captions off. A bit like the mute button for audio volume.

Captions can be a legal requirement in some jurisdictions for videos intended to be broadcast in public, and they’re enforced through the various countries’ legal systems. As ever, I would urge you to look at the regulations in your own jurisdiction – you may find these in a variety of places, from broadcasting legislation, equality and anti-discrimination regulations, to communications and media laws – but also in the ethos and values of your own organization.

The penalties for infringement can be high, and it’s worth including captions and subtitles in your initial brief, in your scoping. In some locations, the requirements apply to both public bodies and organizations with registered charitable status, so do your homework. Check what it is that you need.

So, apart from legal compliance where it applies, why should we provide subtitles and captions? Well, subtitling is a way to make video material more accessible. It actually makes it possible for someone with a hearing difficulty to access the message behind the video at all.

But there are a lot of myths about subtitling and why you shouldn’t do it, so hopefully today we’ll debunk some of those myths. Hearing disabilities can have a large variety of causes: conditions from birth, trauma or injury, health-related issues, and simply getting older. They may be temporary, they may be permanent.

We have a number of colleagues in my work and in the WordPress community who have some kind of hearing impairment. In our work in both of these areas we’ve been running awareness surveys around what people think subtitles are for. Our panel found that there were myths about subtitling that continue to exist, and continue to be shared.

People think that it’s outdated. They think that when someone appears on a video, words pop up automatically underneath as soon as someone clicks a button. They think that subtitling is there for people who do not have a computer with sound – you would be surprised how often that comes up as a reason!

People also think that accessibility does not relate to video. And the one that we increasingly hear – it’s for old people, or for people who really aren’t our customers, so therefore we don’t need to worry about it.

The American National Association of the Deaf estimate – well, I’m waiting for my slides to catch up on your screens – so, I’m hoping that they have.

So hopefully you’ve got a screen that shows a blue screen and orange, but if not I’ll talk over it. The American National Association of the Deaf estimates that five to six percent of the global population has some form of hearing impairment. At the time of their investigation, that translated to around 350 million people who, assuming they have access to technology in order to watch streamed video, would be able to benefit from subtitling or captioning.

Even allowing for a proportion of those not having the means to access the internet, there is still a massive potential audience to cater for. On the 1st of March this year (2020), the World Health Organization estimated that 466 million people have disabling hearing loss. It predicted the figure rising to 900 million by 2050, unless concerted action to address some of the preventable causes was taken.

Hopefully you’ve now got that figure on screen. Let’s just think about that figure. That’s a predicted 900 million people by 2050 who could miss out on your messages, online promotions, and offers – and that’s just the people who can’t access your message because they have a hearing issue. On top of that, you have all those people who can’t access the message for a variety of other reasons.

Let’s have a look at some of those reasons.

These are just some of the ways that subtitling or captions can be used and are being used.

Subtitles facilitate the translation of the original language medium into other languages. Today, I’ve provided subtitles which are being translated into a number of languages as part of WordPress translation celebrations this week. To get your talks and videos used around the world, make sure you add subtitles, as it makes it so much easier, and in some cases actually makes it possible for others to translate the text into their own languages. This might be that they translate in their heads as they watch along, or that they use apps when they’re reviewing what you have told them.

For those of us whose first language is English it can be hard to comprehend how much this translating function of subtitles matters in everyday life. Subtitling television programs, films, and online video entertainment is much more common in countries where English is not the main language.

The need for these language tools to be part of business plans and marketing campaigns is frankly much better understood in many of those countries. We also need to remember: we live in a multicultural and multilingual world. Your customers and stakeholders in your target locations are likely to come from different language backgrounds and have varying levels of fluency. Trends show that the world’s increasing appetite for online learning is matched by its demand for being able to access videos while on the move.

Train and bus travel, with improvements in wi-fi, have seen commuter journeys turn into mini classrooms for one, with noise-cancelling headsets creating the complete study experience.

In our current times, audiences often watch, and want to watch, video content in busy households, particularly during COVID, or in bed, where they don’t wish to disturb other people. Much of this learning and assimilation is only maximized through having subtitles. They also cut through the noise that surrounds us on public transport, in shared offices, or wherever we choose to access the information.

There may be times when it’s helpful to watch a video individually or as a group without the sound on, in places where background noise may make it impossible to have sound at audible levels. Please think about those occasions where you have found that, and how much more useful it would have been if there had been subtitles.

Many people do not have the sound turned on on their mobile devices while traveling or in offices. Increasingly, content is being accessed and videos are being played on those mobile devices.

A colleague of mine in the subtitling campaign, Siobhan Cunningham from the Yoast Academy’s e-learning team, always explains: it is about the word.

If you see someone talk you might take in some of what they’re saying, but having subtitles and being able to follow the words makes it easier for you to follow what they’re saying, and for your brain to prepa– and, you’ll have to bear with me, I need to just check. The lip reading team just asked me to pause for a moment.

Okay. And it makes it easier for your brain to process it, and remember it, and better understand what you are learning.

She said that if she sees words that tell her about what the person is talking about, she’s more likely to listen, more likely to stay, more likely to finish the video. We all know – we’ve all seen the statistics on how long people stay on video. People’s attention spans can be extended if we help them through subtitles. If we’re not in quiet environments but need to access learning material online, it may be the only time of the day that we can spend on our own development, or on keeping up with training that is now essential for our everyday work.

All those really great talks from WordCamps that you would like people all over the world to access? Subtitle them. Someone might then be able to access them at a time that suits them, in whatever environment they are currently in.

From an education and training perspective, the provision of visual representations of technical terms, jargon, and acronyms allows for the reinforcement of learning.

The three key learning styles that I’m sure you’ve heard of elsewhere are visual, auditory, and kinesthetic. And you should hopefully see these now on screen.

That basically means that people learn visually through seeing, auditorily through hearing, and kinesthetically through doing. All of these styles of learning are better supported when the visual picture and the audio commentary are reinforced with a visual underscoring of text.

Even without the express intention of educating, provision of subtitles can subliminally help speakers of other languages acquire or appreciate subtleties of the original language: usage, slang, the idiomatic phrases we all use, spelling, and much more.

The automated transcription software service Sonix has found that 80 percent of all social media videos are viewed with the sound off – a statistic which might make you think that doing so defeats the object of the medium. But be that as it may, it also poses an opportunity to reach that switched-off audience more effectively by using subtitles. Being able to read information and absorb the content helps us to understand, respond, explore, and apply that knowledge.

Subtitles really do reinforce learning. I’ve worked in learning and CPD for 20 years or more now, and every time we have subtitles, I know the learners are learning more, they’re accessing more, they’re retaining more, and they’re able to apply it.

Subtitling can now be achieved increasingly quickly and economically through newer technology and online tools.

Please try and keep that in your minds and think about how you would want to access content.

People often ask, “Is a picture worth a thousand words?” Well, if you can’t hear the words of a talking picture, and have no clues about what is being said, how is that person going to take in your messages? How are they going to learn?

Do you really want them to just be watching paint dry?

And for the people in the audience who are going to tell me that watching paint dry is interesting… then we’ll have that discussion after the talk.

So… search. Search loves subtitles, and a lot of people think, well, why would it? Because search is an index. Googlebot and the other spiders which are crawling around (I always have to shiver when I think about these spiders) are incessantly indexing web pages.

They can’t watch those moving images and they can’t listen to the audio – not yet, not really – but they can home in on captions. Videos with closed captions rank higher on Google search than those without, according to a study carried out by US broadcaster American Digital Networks.

In addition, some advanced searches allow for more specific searching of closed captions. It goes without saying that additional traffic directed to your site by a high search ranking will lead to an increase in traffic to other landing pages or content you want to bring to the party. The benefits for engagement and SEO combined make a powerful argument for ensuring your online videos are posted with accurate, legible closed captions.

Legible is one I’m going to stop on for a moment. Please, if you’re not sure how to make captions legible, there have been other talks during this accessibility event which explain how to do that – how many words to have on screen, the font sizes.

Please, if you’re going to create captions, don’t create them so small and all jumbled together that people can’t access them.

The Scottish audience development company Culture Republic runs workshops in captioning, and they assert that adding captions to YouTube videos has been shown to increase completed viewings by around seven percent. That also helps your search engine rankings, and it helps people find your content, stay on your content, and refer people. They also quote research which shows that 85 percent of Facebook views occur with muted audio.

Again, this backs up the Sonix statistic. I will just apologize – I’m running a temperature at the moment, so I’m having a little bit of difficulty trying to focus on the mouse on the screen. So apologies if the screens aren’t changing quickly enough.

Basically, captions serve as a reminder to the viewer that we are now at a segment of particular interest and to switch full attention back to it.

Reinforced messaging through the use of captions is so important. If the consumer is going to behave in a selective manner, we need to ensure he or she knows when to exercise that selectiveness.

Captioning allows that to happen better. We’ve talked about the bots, but just to reinforce it – search, search, search. They don’t understand video, and if they find no value they will move on.

I’m going to take you down to what’s happening now with artificial intelligence. As with all things tech related, times move on and processes are in a state of constant evolution. Captioning for the last 10 years or so has used differing technologies including automated speech recognition known as ASR. ASR is a predictive tool.

It generates word sequences from an audio signal. The result is not always a totally accurate rendition of what is being said. The nature of live streaming especially, where conversation is interactive, presents challenges to ASR. Not all speakers will be clear enough in their delivery. Background noises in their environment can seem, to ASR, like a verbal sound to be represented in captions.

How well has it been taught to differentiate between sounds? If discussions get heated, people may speak over each other or cut another speaker off abruptly, again resulting in a muddled set of captions.

In fact, you don’t even need that to happen – you can just have lots of different voices that are similar, and ASR may not pick it up.

Strong regional or national accents, or varieties of inflection from different speakers can confuse ASR, and result in misrepresentation in captioning. I’m sure we’ve all sat through presentations, even TV programs, where the captioning has been so obviously incorrect it can almost be comical. Except for those who rely on it. For them it really is not a laughing matter.

You’ll all probably be aware of the variety of accuracies and inaccuracies delivered by today’s range of streaming platforms in terms of captioning. The consequences of misrepresentation via captioning are the same as the consequences of mistranslating between two languages.

To be effective, the ASR system, like Web Captioner and others, needs to have been prepped with a wide vocabulary and specialist words. It can also be taught to learn in multiple languages, and any specialist acronyms, jargon, tech terminology, and so on which it might encounter.

If it doesn’t recognize the content of the source signal, it cannot predict the patterns of words. You will sometimes see a “next best” word or phrase appear, which of course will often make a nonsense version of the original. So one of the questions I get most often is, “Oh, I’ve added AI to my streams, I’ve added an automatic captioner – that’s all I need to do.” No. You need to train it. You need to help it. If it helps, think of it as your pet dog: if it doesn’t know what the instruction is, it won’t know what to do. If it’s never heard the word “JavaScript” it won’t know what to say.
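As an illustration of that kind of training, here is a minimal sketch using AWS Transcribe (one of the facilities mentioned later in this talk) through the boto3 library. The vocabulary name, phrase list, bucket, and job name are hypothetical examples; other captioning services have their own equivalents of a custom vocabulary.

```python
# A minimal sketch, assuming AWS Transcribe via boto3: register specialist
# jargon as a custom vocabulary, then reference it when transcribing, so the
# predictive model prefers these terms over "next best" guesses.
import boto3

transcribe = boto3.client("transcribe", region_name="eu-west-1")

# Hypothetical specialist terms a generic model is unlikely to know.
phrases = ["WordPress", "WordCamp", "JavaScript", "Gutenberg", "Huddersfield"]

transcribe.create_vocabulary(
    VocabularyName="wordcamp-jargon",    # hypothetical name
    LanguageCode="en-GB",
    Phrases=phrases,
)

# Reference the vocabulary when starting a transcription job.
transcribe.start_transcription_job(
    TranscriptionJobName="talk-captions-demo",               # hypothetical name
    Media={"MediaFileUri": "s3://example-bucket/talk.mp4"},  # hypothetical file
    MediaFormat="mp4",
    LanguageCode="en-GB",
    Settings={"VocabularyName": "wordcamp-jargon"},
)
```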

During the past year, the huge jump in demand for online conference streaming due to COVID restrictions has put greater demand on existing automated speech recognition systems. AI engineers – we’ve all been working as fast as we can to try and make things easier, to make things available. But a plea: help us do so. Help us test what is out there. Tell us where it doesn’t work, because then we can change it.

The focus shouldn’t just be on speeding up the process, it needs to also be about improving accuracy.

Facebook Live has been using AI. It’s been doing away with one third of the components of a basic ASR system: basically, it’s been getting rid of the pronunciation lexicon, and training the part which identifies the individual constituent sounds of a word to directly predict the characters of a word. A language model then determines the interrelationship of these words.

So the frequency of use, and words which are commonly strung together, become what it thinks you’re going to say. The slimming down and improvement of accuracy in ASR was achieved using tools like PyTorch, an open source machine learning library derived from the Torch library. Please do have a look at it. There are lots and lots of ways you can help improve that.
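For those curious what that lexicon-free approach looks like in code, here is a heavily simplified PyTorch sketch – my own illustration, not Facebook’s actual system. An acoustic model predicts characters directly from audio features, and CTC loss aligns the predicted character sequence with the transcript without any pronunciation lexicon; the alphabet, sizes, and dummy data are all illustrative.

```python
# A simplified sketch of lexicon-free ASR: predict characters directly from
# audio features and train with CTC loss. Illustrative only.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz '"  # illustrative output alphabet
NUM_CLASSES = len(CHARS) + 1            # +1 for the CTC blank (last index)

class CharAcousticModel(nn.Module):
    def __init__(self, n_features=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, feats):                # feats: (batch, time, features)
        x, _ = self.rnn(feats)
        return self.out(x).log_softmax(-1)   # per-frame character log-probs

model = CharAcousticModel()
ctc = nn.CTCLoss(blank=NUM_CLASSES - 1)

# Dummy batch: 4 utterances of 100 feature frames, transcripts of 20 chars.
feats = torch.randn(4, 100, 80)
targets = torch.randint(0, NUM_CLASSES - 1, (4, 20))  # non-blank char indices
log_probs = model(feats).transpose(0, 1)   # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           torch.full((4,), 100, dtype=torch.long),
           torch.full((4,), 20, dtype=torch.long))
loss.backward()  # a separate language model would then rescore the outputs
```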

Captions open up even more accessibility options.

We’ve already talked about the impact of captions for translation. Another development brought about through artificial intelligence is visual images being created from the descriptive text contained in a caption. This is going to be our next generation of captioning tools. It’s in its very early stages.

We have AI which can produce captions from data contained in images, but only this week the Allen Institute for AI announced it had developed an AI which produces imagery created from those text captions. The results, for those who may not have seen them, are so far restricted to looking like something Picasso might have knocked up in a hurry, but the principle is that they generate an image. They allow somebody who may not be able to understand the full context of what is being said without an image added to those words.

Or they may not be able to process it because their neural processing is different. This is going to take what we can do with captions to a whole new realm. It’s going to make information even more accessible. But if you don’t have captions, if you haven’t introduced subtitling into your business plans already, when this technology develops you won’t be able to use it, because it relies on having those subtitles in the first place.

The aim of the work happening on these developments is that, eventually, the final image will be of photographic quality with color, scale, and proportion all correctly represented. So when I’m talking about an apple and how that relates to a concept I’m describing, that would also appear. If I’m talking about a flowchart, eventually that would also appear. Again, it requires us to help teach it, but that is where the technology is going.

A simplified explanation of that process is that a machine learning algorithm has been trained to recognize conceptual connections between language and visual data. An illustration of this would be that, given the text “a clock tower in the middle of town,” the technology produces an image of a tall tower-like structure with a clock at the top, surrounded by lower buildings.

Suddenly, people are receiving the spoken word, the written word, and a visual representation.

Just think how powerful that combination is going to be.

We already have tools like Web Captioner. If you are doing an event and you don’t have the resources to purchase live captioning done by humans, which is the preferred option, please look at tools like Web Captioner. There are other ones on the market, too. They use Google APIs to create the captions – but invest the time. Invest the time in actually training it.

We tested this at a number of WordCamps, including WordCamp Dublin last year, and we asked all speakers to practice saying a couple of sentences from their talk into Web Captioner, picking the right option that successfully produced the text they were saying.

We also asked them to tell us the words and jargon that would not necessarily be known, or that might, with a strong accent, sound slightly different from what Web Captioner had already learnt. We input that text into Web Captioner, and it was wonderful to see that in most cases this improved the captions we produced to a 98% accuracy level. It didn’t get everything – it still had issues.

“Huddersfield” is always going to be heard wrong. It’s just one of those words that an auto captioner does not get right, but it’s a huge step forward. Since WordCamp Dublin there have been many more developments and tests in tools like this. Web Captioner has, in the last few weeks, released a beta version which you can test and add to things like Zoom and other video conferencing tools.

We’re currently doing tests on this in the WordPress Marketing and WordPress TV teams, as well as tests with other facilities like AWS, looking at how we can use auto-generated captions from those facilities to then correct, and start subtitling more and more videos – and, with your help, making that possible.

OBS, which is open source software, also now has a beta version of captioning. So, if you are doing a talk and you’re going through OBS for that talk, connecting it to Zoom, StreamYard, or whatever you’re using, there is a beta version you can now enable so that live auto captioning can appear. The more people use it, the more it will improve, but we need people to use it, test it, give feedback, and correct it. We need more accents to do it; we need people with more accessibility issues to actually try the software. In my work, we have a number of people who have hearing difficulties who are actually now testing some of this software.

I focused this talk on the why part of captioning.

If you’ve been convinced – and I hope you have – that this is something you really should be doing, I suggest you look at the presentation slides from yesterday’s speaker, Meryl Evans, who’s given a handy list of tools which you can use.

People don’t like being in the dark. Help put the light on.

You can also do that, and make a difference by becoming a subtitling champion. It really is easy. I will guarantee you will learn a new skill, and we will help you learn that new skill.

You can also play a part in knowledge sharing, and the skills that you learn, and the knowledge that you share, go much wider than the WordPress environment, the open source environment. It can make a difference in your products, your company, your culture. It can make a difference in how we deal with information and people’s access to it. Please invest in subtitling. Please don’t put up half-thought-about solutions. Work out how it’s going to work.

A lot of the camps – today’s event, too – also have another option where you can read subtitles that can be changed into different fonts. YouTube subtitling for live feeds is really difficult to read. It’s very difficult to process if you have any kind of neuroprocessing issue. It can be very difficult if you just wear glasses.

Support WP diversity and tech accessibility. It will give you huge rewards, but it also means that the people who really need the technology don’t have to be making that call themselves all the time. This should be something that we do from scratch, that we include in all our planning.

Finally, tell your speakers when you’re planning a WordCamp, or any kind of online event – or even an in-person event, when we’re allowed to have those again. Help them understand about providing text even after the event, so subtitles can be done quicker.

Help them even commit to subtitling something of their own. With the auto captioning tools that now exist, it doesn’t have to take a huge amount of time. Sometimes, where it takes time is because of unfamiliarity with where the speaker’s going, what they’re talking about, and the jargon.

Please come and join our campaign. You can make a difference for inclusion, diversity, and user experience. It will improve your organization’s messaging and reach. It will make a difference. It may even help you one day.

Thank you. You can find me at @NonStopNewsUK on Twitter. I know I have had a lot of questions come in on Twitter already; I will make sure I answer those in the next few days. And I had a request from a couple of people who’d like to do an interactive session learning how to use Amara, and we have got those planned, too. So please do contact me on Twitter at @NonStopNewsUK, or on Slack at abhanonstopnewsuk, and we will connect you with those things.

Amanda: Okay, thank you so much, Abha. That was wonderful – I’m so full of information. We really appreciate you sharing your expertise and breaking down the kinds of captions and subtitles, and how essential they are to learning.

There was a lot of conversation in the YouTube stream chat, too, and we do have some questions, with a little less than 10 minutes to get to them, so if you don’t mind… The first question I’m going to ask was, I think, mostly answered in the chat, but the comment was: “It’s very interesting about non-English speaking countries having adopted captions at a greater rate. Which countries do you feel set the standard in terms of subtitles?”

Abha: The Spanish film industry has actually been really good at this, and there has been some work with the US as well, which has been very good in terms of how subtitles are presented. I think that has been a model that lots of other smaller broadcasters have taken on. So, in terms of actual conferencing, where countries have a regulatory requirement to caption, it is worth having a look at the processes that they use, because they tend to be more tested, and they have tended to look at the various options and how they can do that. So again, America has had a lot of regulation on that. Some of the big charities have also done some work on that, and I can share some links after this talk as well.

Amanda: Okay, perfect, thank you.

The next question is: “SEO and YouTube are hidden gems for building our online footprint, as you mentioned. Are there rules for effective content for web pages (links, etc.) for captioning, to help videos rank higher?”

Abha: There’s a lot of work going on at the moment in terms of readability of captions, and there’s some pioneering work in Switzerland on how algorithms can give a higher rating to captions that are readable. So captions that come in huge blocks of text with no punctuation would have a lower reach. And there is some testing going on with a couple of the big providers for how that can be promoted more.

And you’ll notice that some automatic captioning tools, particularly YouTube’s, miss out punctuation. So when people are then watching the video back, the bots are not understanding what’s being said, because there’s no punctuation to help them. So if you put punctuation in, that does increase the searchability of that content, and the ranking.

Amanda: Okay, great, that’s helpful to know. Okay, we’ve got a little more than five minutes. Make sure, if you do have any other questions, to throw them in the chat. But the next question for you, Abha, is: “What is your opinion on delaying the audio and video signal at live events in order to give more time to live transcribers or AI tools to get the subtitles or captions correct and deliver them synchronized?”

Abha: I think it’s a really big problem. Just looking at today’s event, I know my screen over here, which has been live, has been quite a way behind what I’ve been doing.

A lot of the software, like OBS, has a real issue with this, because it creates a delay between the video and what you’re actually saying, which causes a difficulty for anybody who’s lip reading. They’re lip reading something that is then out of context with what’s behind them, or with what you’re actually trying to emphasize, so they could see an emphasis on something that is no longer being talked about.

So I think that, as we run more online events, we need to think about putting a delay in. But we also need to do some simpler things, like encouraging text to be available. The captioners will work a lot quicker if they know what jargon is going to be used.

So, at WordCamp London, our live captioners are very accurate. One of the reasons for that accuracy is that they have the jargon and the terminology, and quite often the talk wording from similar speakers, and that means they don’t have to build in that extra delay for everything to catch up again. It still doesn’t avoid the internet delay that you will have with video, but there are easy fixes – most of the software, particularly the paid-for software, does actually allow you to compensate for that delay.

And I think there is an expectation now, particularly in the commercial world – and maybe it needs to come to open source, too – that if you’re putting on an event, you help train your speakers in how to do that. Because otherwise, regardless of the delay that you put in, some people’s cameras may still be adding several seconds of delay on top.

Amanda: Right. Okay, okay, thank you for that response.

Questions on “The Case For Captions”

  1. Q: It’s very interesting about non-English speaking countries having adopted captions at a greater rate. Which countries set the standard in terms of subtitles?

  2. Q: SEO and YouTube are hidden gems for building our online footprint. Are there rules for effective content for web pages (links, etc.) for captioning, to help videos rank more?

  3. Q: What is your opinion on delaying the audio and video signal at live events in order to give more time to live transcribers or AI tools to get the subtitles or captions correct and deliver them synchronized?