Creating Accessible Content with WordPress

Jerry Jones: Creating Accessible Content with WordPress

A perfectly coded website will still break down if the content isn’t written and built accessibly. Learn how to write content that is accessible to everyone, and how to quickly evaluate themes and plugins for basic accessibility.

View Jerry’s Presentation

Transcript

Jerry Jones: I’ll be talking about “Creating Accessible Content with WordPress.” And, yeah, ’cause you make great content so let’s make sure everyone can actually access it. Quickly, going over what we’re gonna talk about today. Talk about themes and–themes and accessibility and how they relate to content; quick tips about accessible content; the importance of headings and titles; images and especially writing good alt text which is a lot trickier than you would expect; about links and writing good links; some color.

The last session went into much more detail than I will, and it was great, but I thought I'd give a few quick points; a little bit of cognitive accessibility; and the last part will be how to quickly evaluate plugins, 'cause there are so many plugins and themes about accessibility, and it's hard to narrow down unless you look at the code. So there are a few tips I can share on how to get an idea of a site's accessibility without having to look at the code.

And then, if we have time, we'll evaluate some plugins, themes, or websites together. You can share a few themes or plugins, or links to different websites, and I can apply those tips we talk about. I know it's a little odd, in a talk about content, to start with themes, but I wanted to make sure we had an understanding of how themes and content relate. So sometimes we get questions like, "Give me an accessible theme; I want a theme that's accessible, where can I find one of those?"

But the problem is a theme on its own can't really be evaluated for accessibility, because it doesn't have content. As much as I, as a designer, and other people might like to just go look at a theme, most people don't go to a website just to look at the wrapper of the page. You go for the actual content. And so when we talk about accessible themes, we say they're "accessibility ready," because only a site with content can be accessible.

So a theme can be ready to be accessible: if the content is accessible, the theme, the wrapper, won't get in the way of that. So with that out of the way, let's dive into a little bit of the actual content. A few quick tips: little things make a big difference sometimes. There's a tweet on this slide that says: "When using hashtags, if you capitalize every new word, blind Twitter users can actually hear the message you're saying." So instead of the hashtag in all lower case, "#blacklivesmatter," which still rocks but is less accessible, you capitalize the first letter of each word, "#BlackLivesMatter," so it's more segmented. That makes it accessible, even on Instagram, so yeah, you should capitalize multi-word hashtags.

When you use a screen reader, for example if you're editing text with one, and you come across that hashtag, blacklivesmatter, it'll tell you it's a misspelled word if you're in editing mode. Whereas with the capitalized version, the screen reader can identify each of those words as separate and intended to be separate, and it will read correctly. With all lower case, it just assumes it's one long string and will try to read it that way.

Similarly, for screen readers, we have this emphasis meme format, which is visually kind of fun, with clapping-hands emojis between words for emphasis. But, unfortunately, when you access this content with a screen reader, an assistive technology that reads the screen to you, used by blind and low-vision users (and actually I kind of enjoy using them from time to time as well), it will read this as "Don't clapping hands use clapping hands lots clapping hands of clapping hands emojis clapping hands," which doesn't really have the same effect. So, in general, let's try to avoid that format.

And captioning videos is really important, like we're doing right now, because without those captions the video is completely inaccessible. Similarly, provide transcripts for audio content, which was also discussed earlier and will be discussed in more detail in some upcoming presentations. So, headings. Over here we have this little "I'm a Heading" image, and we're gonna talk about what headings do for us and how they give content structure.

So on the right side of this slide, the page title is an H1, and then there's a little content. Second level heading, H2. Third level heading, H3. Another second level heading for a new section, H2. These are all different headings, as you can guess, and we have these H1, 2, 3 numbers to designate, not the order, but the importance, how that content is nested, which we'll go over in a second. I like to think of headings as chapters in a book. They give the page structure like a book's structure, so the page title would be like the book title. It's the biggest, most important heading.

It defines what this content is, whereas underneath that, a second level heading, H2, would be like a chapter within a book. Chapters are important sections, but the book title is the most important. And then an H3 might be a little subsection within a chapter. You have levels all the way down to 6 that you can use, but often you really only use 1, 2, 3, sometimes 4; 5 and 6 are there, by all means, if you want to go that far. But the important thing to remember here is not to choose a heading level just because of the size of text it gives you.

So we don't wanna use an H3 in a spot where we should use an H2 just because we want a smaller break. The semantics of the tag at each level are really important. And one of the reasons this matters is that screen readers can navigate by heading very easily. When you land on a page, it's kind of like skimming the content: if you're a sighted person you can scroll down the page, look at some of the main sections, and say, "Is this a page I want to be looking at?"

Similarly, if you're using a screen reader, you can press the "H" key, depending on the screen reader software, and it'll jump you to those headings and announce the heading level, so you get an idea of the overall structure of the page, where you're at within it, and whether this is something you want to dig in on or which sections you want to explore. And to illustrate what I mean by the semantics and the structure, let's take this presentation so far. It's called "Creating Accessible Content with WordPress."

If we weren't to give it any sort of semantic tag, that H1, it would just sit at the same level as everything underneath it. So next, we talked about "Themes and Accessibility," some content considerations, and we're gonna talk about adding headings in WordPress next. Do you see how you get this structure of the page? A kind of table-of-contents effect is starting to form. So "Images and Alt Text," and then an H3 underneath that, "Select the Alt Text." Just because of the way we marked it up, by choosing the right heading levels, we get a lot of meaning and structure from the page.
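The outline described here maps directly onto HTML heading elements. A minimal sketch of that structure (the section titles are taken from the talk; the placeholder paragraphs are illustrative):

```html
<!-- Page title: the single H1, like a book title -->
<h1>Creating Accessible Content with WordPress</h1>
<p>Intro content goes here.</p>

<!-- Major sections: H2, like chapters in a book -->
<h2>Themes and Accessibility</h2>
<p>Section content goes here.</p>

<h2>Images and Alt Text</h2>
<p>Section content goes here.</p>

<!-- Subsection within a section: H3 -->
<h3>Select the Alt Text</h3>
<p>Subsection content goes here.</p>
```

Because the levels are in the markup, not just the font size, a screen reader user pressing "H" hears "heading level 2, Images and Alt Text" and can skim the page the same way a sighted reader does.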

And so, in WordPress, just to quickly go over it in case you're not sure, there's a heading block. The page title itself is also a heading, and that's almost always going to be an H1, so you won't need to use H1 yourself; it'll automatically be your page title. You can add the heading block and then, within the block toolbar, you have the H2 up in the top part of the toolbar, as well as the heading settings in the sidebar, with H1 through H6. You can choose the one you want, but my preference is using the little pound sign, the number sign.

If you type two of them and hit space, it'll automatically create a heading block at level 2 for you; three of them and hit space, a heading block at level 3, and so on. This is how I always do it. It's just a really quick and easy way to create a heading block on the fly. All right, so images and alt text. On the right side of this slide we have an orange box with the little missing-file symbol, and it says, "A purple slug on a red rose."

That text there: if you're a sighted person, this is usually the only time you'll interact with alt text, when a file or an image can't be found or the link is broken; the browser shows you that little symbol and the alt text in place of the image. But if you're using a screen reader, that alt text is what describes the image.

And a little tip, quickly: don't use images of text. I see this on Twitter all the time, same with Instagram, where people will post a big picture of text in order to get around the character limit. Unless you're copying and pasting all that text and setting it as the alt text for that image, which you can do and should do if you are gonna do this, most of the time people don't.

And so it just reads, like, the file name of the image; it doesn't announce anything relevant, so it's entirely inaccessible. So do all images need alt text? Well, it's tricky. They almost always need alt text, but some can be presentational, as they discussed in the last presentation: there are some images that don't actually carry any meaning, and we say those are presentational.

Imagine, like, a graphic on the side that is just a decorative element. You don't need alt text for those. But unless you're absolutely positive it's decorative, I would recommend going ahead and adding the alt text; let's just assume adding alt text is better unless you're sure.
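In the underlying markup, this all comes down to the `alt` attribute on the `img` element. A small sketch (the file names are made up for illustration):

```html
<!-- Meaningful image: describe it, relating it to the surrounding content -->
<img src="slug.jpg" alt="A purple slug on a red rose">

<!-- Purely decorative image: an explicitly empty alt="" tells screen
     readers to skip it entirely. Omitting the alt attribute altogether
     is worse, because many screen readers then fall back to announcing
     the file name. -->
<img src="divider.png" alt="">
```

So "no alt text" for a decorative image means an empty `alt=""`, not a missing attribute.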

And when you're writing content, it's almost always going to be the right choice to add it. But writing alt text is actually kind of hard, in my opinion. I wanted to get better at it, so I asked people on Twitter for their favorite tips on writing good alt text. I said, "It's sometimes hard for me to find the line between not enough and too much description," and I got back what felt like some good responses that helped me categorize how to write better alt text.

So some of them were: be descriptive but succinct, like about one to three sentences max. Highlight the important details in the image that are relevant to the page content. That's the part I had not considered before, that it needs to be relevant to the page content. It's not just a standalone image if it's within your content; there's likely a reason it's there, so try to create that connection. I'd heard recently, too, about someone who related it to an image being like paragraph text.

Like, if the image wasn't there, that text could be naturally inserted in its place. And this is something I knew but would still do from time to time: starting alt text with "An image of" or "A gif of." You don't need to do that. A screen reader already tells you that it's an image when it reads alt text, so there's no reason to, but I would still catch myself writing it that way, for some reason.

And Carie Fisher wrote a good article for Smashing Magazine about alt text, and one of the things she recommended for writing it was the telephone test. This goes back to the earlier slide, the broken-image slide with "A purple slug on a red rose."

So she said that imagine you’re on the telephone and–or you pick up a telephone to call your friend and, as soon as they answer, you just say, “A purple slug,” and hang up.

Well, if you do that, what image do you think your friend is gonna have in their head? For me, if someone did that, I would just have a purple slug in my head on, like, a white background. There wouldn’t be any, like, context. It would just be as this, like, floating purple slug. But if you compare that to just adding a little bit more context of picking up the phone and saying, “A purple slug on a red rose,” and hanging up, now in my head, for me in my kind of mind’s eye, I see the purple slug on the red rose and then the red rose is maybe in a field and so I get, like, the rest of the image filled in. It creates a fuller image for me.

And so those little details, those little extra things, can make a big difference. So let's do a little practice. You can comment on YouTube if you want, or feel free to just guess on your own, and let's select some alt text for this image here. I have a little story that says: "I was excited until I sat down and the small cart shook. 'It will be worth it,' I reassured myself."

And then we have this image that we need the alt text for. There’s, you know, some mountains, the clouds, the little cable cars and it’s winter. We’ve got the alpine trees. It’s a nice image with the blue, with bluish cast, so let’s see. Option A: Image containing mountains, clouds, cable cars, trees. Blue and white with small houses. Photo taken from above. B: Mountains and tiny cable cars. And we’ll talk through this afterwards. C: Beautiful view of the mountains as we traveled across the land in cable cars, right before we passed through the clouds. Or D: Cable cars about to travel underneath the clouds, with large mountains in the background.

So option A is the kind of alt text that would likely be generated by a stock photo library, when they run an image through some AI that grabs all the details it can identify and shoves them into text form. "Image containing mountains, clouds, cable cars, trees." When you go to a stock photo library and look at the alt text, almost all of it is similar to this: very generic, listing all these little details. But for me, even with all the details, it doesn't give a great representation of the image. It feels generic and kind of cold, so I would not use that as alt text.

B, “Mountains and tiny cable cars.” It’s a little short but it could totally–it could work but I feel like there’s a lot that is missing.

C, "Beautiful view of the mountains as we traveled across the land in cable cars, right before we passed through the clouds." And this is tricky, 'cause if I hadn't written these options myself, I would have a hard time choosing between C and D. But what I think C does well is relate the content to the image, so it's not just standalone alt text, like D.

D is great alt text for this image if it were just in a slide gallery of images, or in a stock photo library, where the image sits outside of any context or content. So I would go with C, 'cause it relates the text, the content, to the image better. And then you have D: "Cable cars about to travel underneath the clouds, with large mountains in the background."

It's still good alt text, but maybe not the appropriate one within the context of this story. So how do we actually add alt text in WordPress? It's in the media library, in the alternative text field. That would actually be a great place to add that D option, the standalone alt text that just describes the image on its own, like in a gallery, because when you add it in the media library, it's not tied to any particular piece of content.

But when you're in the block editor and you add an image, that is where you would use the option C alt text, where you have the context of the story or the content. So that would be a good place to put option C. And something to keep in mind is that when you add alt text in the block's image settings, it won't save that to your media library.

The alt text within the image settings in the block editor only saves in relation to the image in that content. And the reason for that is, you can add that image again later in another story or another post, and the context could be different. As we've discussed, it's important for the alt text to relate to the content. But if you add it via the media library, that's what gets used in a gallery, or anywhere you don't override it.

So I know that's a little complicated, but it's just something to keep in mind if you're not sure why your alt text is showing up, or showing up differently, things like that. So that was a bit about images, and now we can go to links, hyperlinks, things like that.

And I see people use links inappropriately all the time. Some of these are very little things; a lot of accessibility, when you're creating content, is just doing the little things right, and that can make a huge difference. So, one thing: don't open links in a new tab, please. In general, anything that you code that takes away control from the user is a bad thing.

So, you know, when you go to a website, if you want to open something in a new tab, you can right-click and select "Open in a new tab." Those are choices that the user can make, and when you code it so that it automatically opens in a new tab, you're taking that control away from someone, which is not ideal. I'm sure there is some valid argument for opening links in a new tab but, generally, when I hear them, they're more focused on the business side of it than on user preference.

And the other issue with opening links in a new tab is that it takes you out of the context you were in, so now your back button is broken. If you didn't notice it open in a new tab, you're not sure where you are. There are a lot of things that can go wrong because of that, so unless you're absolutely sure that opening a new tab is better for accessibility, you should not do it.

Like the last talk described, you need something more than just color to set off your linked text. In general, underlines are preferred; you could use a background, but you need to make sure it has sufficient contrast with what's around it, plus some other element to help identify the link. And then, this is a big one. Again, you'll see this everywhere, I'm sure.

We’ve all written linked text like this, but avoid generic phrases like “click here,” or “read more” because linked text should describe the link and not rely on the surrounding text for context. So a link should let you know where it’s going to take you just by having the linked text. That’s the ideal.

And these, again, are little things you can do to make a big difference. And why I say that: this slide shows the screen reader rotor for VoiceOver. If you're on a site that just uses "click here," you can navigate by links and see that structure, and it's just gonna say "click here" for everything. It takes those links out of context and there is no relation. But imagine instead someone wrote those links as "Buy now," or "Upcoming Events," or "All of our available properties." You don't even know what site you're necessarily on, but you can get an idea of what's gonna happen when you click, just from the link text itself.
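As markup, the difference is just the text inside the `<a>` element (the URLs here are placeholders):

```html
<!-- Generic: in a screen reader's links list this is announced
     out of context as just "click here" -->
<a href="/events">Click here</a> to see our upcoming events.

<!-- Descriptive: the link text stands on its own -->
See our <a href="/events">upcoming events</a>.

<!-- And this is what "opens in a new tab" looks like in markup;
     in general, leave target off and let readers choose how to
     open the link -->
<a href="https://example.com/" target="_blank" rel="noopener">Example site</a>
```

Screen readers pull only the link text into their links list, which is why "upcoming events" works and "click here" doesn't.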

Colors: the last presentation did a great job of going over all the details, so I won't spend too much time here. But a few basic things to keep in mind. First, color contrast. The slide says, "Can people read the text," and the text fades out into a blue that matches the blue background.

It makes it really hard to read. So we're going for a minimum contrast; color contrast checkers are pretty easy to come across if you google for them. For contrast ratio, the minimum you'd wanna shoot for is 4.5:1 between the foreground and background, and then, ideally, you'd get closer to 7:1, especially for body text.
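Those 4.5:1 and 7:1 numbers come from the WCAG contrast-ratio formula, which compares the relative luminance of the foreground and background colors. A sketch of what the contrast checkers compute (the function names are mine; the formula is the WCAG 2.x definition):

```javascript
// Linearize one sRGB channel (0–255) per the WCAG 2.x relative-luminance definition
function linearize(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color
function luminance([r, g, b]) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05)
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

A mid-gray like `#767676` on white lands right around 4.5:1, which is why it is often cited as the lightest gray that still passes AA for body text.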

Other UI elements can be closer to 4.5:1, but it's something to remember. The last presentation also mentioned this: don't rely on color only. And here's an example of that. We have a form that says "First Name" with a red check, and then "Last Name," with my last name, "Jones," in it, with a green check. That's how it looks with no color blindness.

Then, if you look at the same thing with red color blindness, both those check marks start to look a lot more similar, and there's no longer a strong difference between them. And I used the check mark here because people sometimes use the same mark for both the positive, correct state and the negative, incorrect one. But you can fix things like this by adding some text to describe it, 'cause similarly, icons shouldn't stand alone; there should always be some text that supports them. So in this instance, for the empty field, we just say, "Your first name is required."

And you can see in both versions, with color blindness and without, you get that text to help support it and actually tell you it's wrong, which makes it way more accessible for everyone. Just quickly, some cognitive accessibility tips: make things easier to understand. One way to do that: if you have a long paragraph that makes a few points, break it up into a bulleted list.

You can also use shorter sentences and smaller words. Highlighting or bolding the important pieces of text is super useful, and useful for skimming too, so you can get the idea of a larger article and see the important parts. Don't ever use a timer because, one, it puts unnecessary pressure on people and, two, there's rarely a reason you would need to limit someone's access to the content. It takes something that could be accessible and makes it inaccessible just by adding a timer. So please don't use them, and don't autoplay things.

This is another case of taking control away from the user, and the only argument I hear for it is from the business side; it's not a person saying, "I prefer everything being autoplayed." We've all been to a website where something starts autoplaying and you don't know how to turn it off. So many things can go wrong with it. So please, don't autoplay things.

So, about evaluating for accessibility. This goes back to the beginning, when we asked for some plugins or themes. Here's how I go over them, 'cause oftentimes I'd be in a situation where I'd need to evaluate, like, 20 themes or 20 plugins and narrow down which ones we should even start looking at. And evaluating for accessibility first is such an easy way to eliminate, I'd say, 90-plus percent of plugins or themes.

If you just test each one with a few things, you will narrow down your pool dramatically, very quickly. So I call this the five-minute accessibility audit. I test the keyboard interactions using the tab and arrow keys. I zoom in and out to see if it breaks the functionality. I see if they've labeled inputs, and inputs here are form fields; we'll talk about all of these in a minute. And I see how they use color: do they use it appropriately, contrast-wise, and do they use color to signify a meaning without supporting text.

So, first, keyboard navigation. I have this Twitter navigation here. If you press Tab, you should see that focus ring. The focus ring tells you where you are on the page; it should be very visible, easy to see, and it should allow you to move around just using your keyboard. As a sighted keyboard user, you would use Tab most of the time, and then if you're in a radio group or a dropdown you might use the up and down or left and right arrows. So if you reach something and you're not sure how to interact with it, try the arrow keys and see if that gets you somewhere.

Zoom in and out of things. When you zoom in, it should increase the size of the text and reflow content if necessary. This is an example of a website after pressing Command-Plus on a Mac: it zooms in, and you can see the header reflows to accommodate the larger text.

Labeling inputs. We have this form here with email, confirm email, and password, and the text inside the boxes looks like labels describing the fields, but those are actually placeholders, not labels. Placeholders aren't actually necessary, but if you use them, they should be examples of the text to be entered. So instead of "Email" there, it should say something like example@gmail.com.

A placeholder is supposed to be an example of the content to be filled in. But the bigger issue here is that once you click into that field and start typing, your label is gone. A much better way is to have the label coded correctly, outside of the input. That way, when you're typing into it, you can still see the label. And to demonstrate why, see if you can remember what this field was for. Now there's text in it, and the label is gone. If you happen to remember, it was the name field: "What should we call you?" So it's really important to have those labels visible.
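In markup, the fix is a real `<label>` associated with the input via matching `for` and `id` attributes, with the placeholder (if any) reserved for an example value. A minimal sketch:

```html
<!-- Bad: the placeholder is doing the label's job, and it disappears
     as soon as the user starts typing -->
<input type="email" placeholder="Email">

<!-- Good: a visible, programmatically associated label, with the
     placeholder showing an example of the expected format -->
<label for="email">Email</label>
<input type="email" id="email" name="email" placeholder="example@gmail.com">
```

The `for`/`id` pairing is what lets a screen reader announce "Email" when the field receives focus, whether or not anything has been typed.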

So now would be the time, if anyone has shared something, to do a quick screen share; we can go to that website and run through those few things, like the keyboard navigation, looking for input labels, the zooming, anything like that. And if not, then we can also just go over some questions in general. But that's the end of what I have prepared.

Ahmed: Hello, Jerry. Thank you for the presentation. I've learned a lot of small things that can make a big impact. I believe our users can take away that benefit and also understand the reasons why it is important. We have a bunch of questions and we still have some time to address them, so let's go ahead with them. Here's the first question for you. For alternative text, you spoke of images of blocks of text needing alternative text that is the same text. When is that too many words? Is there a limit defined by WCAG for the number of words in alternative text?

Jerry: I don't think there's a limit; if there is, it's very, very large. I'm not aware of one, but you could definitely look at WCAG or, actually, I think it would be in the HTML spec. But if you are reaching a maximum, it's probably not great alt text, is my guess. Unless, like the Twitter image-of-text thing, you're trying to recreate a large piece of text and you're limited by the platform, such as Instagram or Twitter. If you're doing it on your own website, it would be better to have that text just provided as content instead of an image.

Ahmed: All right, great suggestion. Next question: "What are the best practices when it comes to alternative text and data-heavy charts?"

Jerry: In relation to– can you repeat that one?

Ahmed: Sure, so, “What are the best practices when it comes to alternative text and data-heavy charts? Should you just summarize the main takeaway or include all the details? For example, bar chart comparing results.”

Jerry: With graphics like that, it can get tricky. You would definitely wanna provide all the detail in an accessible format, but sometimes it depends on the control you have available. If you were able to mark that up in a more interactive way that could be accessible, rather than as an image, that could be better, but that would be outside the scope of this talk. If you were limited to just having an image, and that was your only option, then you would probably wanna include all the details, yeah. But without seeing exactly what it is, it can be hard; there's a lot of subjectivity to it. In general, if you're providing content, all of it needs to be available.

Ahmed: Absolutely. So what would be your one piece of advice for writing good alternative text?

Jerry: I think the things I broke it down into, limiting it to one to three sentences plus relating it to the content, are what, for me, flipped the switch on how to write good alt text. But, like I mentioned in the presentation, I'd recently heard someone describe it as a paragraph, and I don't think I explained that well when I discussed it.

But if you look it up, I think Dave Rupert is his name, and he wrote about how he has reframed it in his head: if you took the image out and just put the alt text in its place, it should make sense within the flow of that document. So he started describing it to himself as just paragraph text: why is it there, and what is it doing for the content? And if the answer is that it's not doing anything for the content, then likely the image either doesn't need to be there, it's not helping anyone and is just taking up space, or the image is presentational.

Ahmed: Okay, thank you. So the next question is a bit more dramatic: "As a member of Automattic, what can we expect in the near and distant future in terms of changes to improve the accessibility experience? Do you wanna share some insights?"

Jerry: I'm not positive. It's definitely a large organization. I've been working more actively on accessibility within it, making sure that anything that comes through my team gets my full attention on accessibility. That doesn't mean I always get it right, but I can assure you that I'm doing my best, and I can speak to the general intention of the company.

People are very, very supportive of accessibility. It may not always come out as releasing fully accessible things all the time, but I can say the intention is there. I have never had to push back and say, "We need to make this accessible," and have someone reply, "Are you sure?" It's always been, "Oh, yeah, definitely. How do we do that?" I've never had to justify to someone that accessibility should be there.

It's 100% supported by everyone there. But things move fast and not everyone is an accessibility expert, so it is hard finding that balance. Just recently, I put in a review for a combo box element, which is a search field tied to a dropdown, like an autocomplete dropdown, and I said, "Hey, this isn't a great direction.

There are quite a few reasons why this isn't as fully accessible as it should be." That was maybe a month or two ago, and they've since totally rewritten it and gone a different direction that should be way more accessible. So, yeah, I hope that helps answer it: the intention and the support are there, but a lot of people are human, as we know, and don't always get things right.

Ahmed: I’m sure that information will pass. The next two questions are a bit more technical, so the next one is about Gutenberg and the Block Editor. So the question is: “Does Gutenberg or the Block Editor always pull in the alternative text that has been entered in the media library?” And also, “Does the Editor permit overwriting that already-entered alternative text on a local post?”

Jerry: So if I recall it, so it’s a question about the media library when you insert the image into the Block Editor, it pulls the text from the media library, if it’s there, and fills it into the Block Editor as, like, your kind of default. But then when you edit that text in the Block Editor, it won’t–and save the post. It won’t take what you’ve entered and save it to the media library.

I believe that’s how it works. But please double check that for yourself. But I believe any text in the Block Editor does not get saved to the media library. That’s–those are separate for those reasons I outlined before that, like, yeah, that they need–they need a way of being distinct from each other because of the context of the image within that flow, yeah. But go ahead and test it out and make sure I’m right on that.
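What Jerry describes can be seen in the markup the Block Editor saves. In a sketch like the following (the attachment ID, filename, and alt text are all invented for illustration), the alt text lives in the post content itself, while the media library keeps its own separate copy as attachment metadata:

```html
<!-- What the Block Editor saves into the post content: the alt text
     you type in the editor sidebar is stored right here, in the img tag,
     not written back to the media library. -->
<!-- wp:image {"id":123} -->
<figure class="wp-block-image">
  <img src="/wp-content/uploads/team-photo.jpg"
       alt="The support team gathered around a whiteboard"
       class="wp-image-123" />
</figure>
<!-- /wp:image -->
```

Because each use of the image carries its own alt attribute in the post, the same image can be described differently depending on its context, which is exactly the distinction Jerry outlines.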

Ahmed: All right. The next one sounds interesting. “Where can we find accessible-ready themes?”

Jerry: I believe just the WordPress.org, like, themes repository search, I believe, has an accessibility-ready, like, label. But that would just be for free themes. Otherwise, I would say if you’re looking at a theme outside of that, to go ahead and, yeah, use those kinds of tips when you don’t know, like, the underlying code and a lot of the little details. Those tips that I kind of pointed out on evaluating things can really go a long way toward isolating, like, if you started out, like I said, with 20, you’ll probably end up with fewer than 5 that have really considered it and gotten it and done it well, yeah.

Ahmed: Thank you. The next one says: “Why is it not good to open external links in a new tab, especially the social media links?” Any thoughts?

Jerry: When you say “social media links,” is it like if you’re on your website and you have, like, a few different social media links to open? I think I would say the same things I went over, which would be you’re taking control away from the user, so if I go to your website and I see the social media links, I can choose to open that new tab if I wish to do so, or maybe I’m ready just to go on.

That, yeah, that when you are making an assumption, a lot of accessibility issues arise from making an assumption about what the user wants, because the user is not one person. We are very diverse in abilities and disabilities and what’s right for one group, a person even within a group, can vary widely and so the best thing to do in general is to just give the pattern and let that person decide for themselves what is best for them, and how they want to interact with something. So that might be a little bit more, like, accessible, philosophical, about just opening links in a new tab, but I think it’s an important illustration of the larger scope of that.

But more practically, like, specific reasons, again, would be like if you are using a screen reader and a link opens in a new tab, it’s very easy to miss, like, it’s very easy to just not hear that you’re in a new context and so, say you clicked that social media link, it opens in a new tab or new window. It’s harder to get back to that window if you’ve missed that you’re not in this new context. And so, I can’t just use the back button anymore to quickly go back, because your history is within your current browser tab, and when you open a new tab, that’s– there’s no more history. There’s a disconnect there. So that’s another reason I think it’s useful to not open things in a new tab.
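If a site decides to open a link in a new tab anyway, a common accessibility pattern is to say so in the link text itself, so screen reader users aren’t surprised by the context change. A minimal sketch (the URL and class name are illustrative; `screen-reader-text` is a common WordPress convention for visually hidden text):

```html
<!-- Warn users before taking them out of their current context.
     rel="noopener" prevents the new page from scripting the opener. -->
<a href="https://twitter.com/example" target="_blank" rel="noopener">
  Twitter
  <span class="screen-reader-text">(opens in a new tab)</span>
</a>
```

This doesn’t remove the objection Jerry raises about taking control away from the user, but it at least makes the behavior predictable.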

Ahmed: Absolutely, Jerry, so we have only a few more questions to go, so the next one is related to the e-commerce industry. “FOMO, Fear Of Missing Out, is a famous or infamous technique used in the e-commerce industry where a counter is used to increase the sale. Does it make a site inaccessible according to your thoughts?”

Jerry: Yeah, I think it would. Yeah, there’s–and again, like, it’s similar to what I discussed before, is like, those are business decisions, not considering the person behind that, and, like, yeah, I’m not sure what user-focused decision would make a site more accessible by having a timer on it. I can only think of things that would be on, like, the business side of why they would want that.

Ahmed: Right, thank you. The last question is about the news broadcasting agencies. “So, the majority of the websites for news broadcasting agencies or media agencies don’t really offer many accessibility options. Why is this the case? Would you agree?”

Jerry: I’m not sure I feel qualified to talk about it. I guess I would maybe need a specific site or something to evaluate but I’m not sure about a larger media organization and how that relates to accessibility, unfortunately.

Ahmed: All right, so Jerry, thank you so much for being patient and answering so many of the questions. We do see our audience is paying close attention and they have the eagerness to learn in depth from you, pick your brain on topics of interest, so I would like to once again thank you so much for being with us.

Accessibility by Default in Authoring Tools

Susanna Laurin: Accessibility by Default in Authoring Tools

This presentation will describe ongoing research in built-in accessibility by default in authoring tools with the goal of disrupting the market. It will inspire developers and designers by showing what is possible to achieve when providing accessibility support and teach content creators and website owners what to require from their suppliers. More than 50% of accessibility fails are created by web authors. Most of them have limited experience in accessibility.

With increasing accessibility regulations in Europe, hundreds of thousands of web authors need training. There is no chance the market can meet that training demand. But what if authoring tools could provide built-in accessibility by default? In an EU-funded research project called WE4Authors Cluster, a consortium of some of the most used authoring tools in the public sector in Europe (Drupal, Plone, Joomla, SiteVision, and TinyMCE) is working together to provide better accessibility support for web authors, led by the market-leading accessibility consultancy Funka.

In the project, accessibility features are prototyped and tested with web authors, to prioritise and agree on best practice to help content creators publish accessible content. The results will be shared with the whole community.

Watch Susanna’s Presentation

https://youtu.be/85K8ux278eA

Transcript

Well, um, are we live?

Hello everyone! Welcome to the WordPress Accessibility Day 2020. My name is Roberto Remedios.

I’m, I’m a user experience, user interface designer. I live in San Jose, Costa Rica. I’m also an accessibility advocate. I’ve been working in Latin America on accessibility with different groups of people – deaf people, blind people – and, um, I mean, I love to share with all you guys tonight.

I forgot to say thanks on the last talk, uh, to our sponsors, to our organizers, and to my moderator, Kevin, who is pasting the questions that you guys had on YouTube into the page. Please remember, if you have any questions, just put the questions on the YouTube chat.
Uh, our next speaker is Susanna Laurin and she will be talking about accessibility by default in authoring tools. She is Chief Research and Innovation Officer at Funka. She has more than 20 years of experience working with, uh, accessibility at the senior management level. She is an international expert on European Union accessibility policy and regulations, and she has given several workshops and written some books about the specific EN standard.

So, you have the control right now, so Susanna, thank you for joining us, uh, you can share your screen.

Susanna: Okay. Thank you, Roberto! I think I’m now sharing my screen and hopefully I also have my microphone on.

Is this working?

You can nod, at least.

Roberto: Yes, it’s working.

Susanna: Okay, perfect thank you. So good morning, everyone! I’m doing this from Sweden, Europe, so it’s very early Saturday morning and I want to tell you straight away that I’m not a developer. So if you have technical questions around my presentation I would be happy to forward them to our developers.

I can’t get them to wake up this early in the morning on a Saturday, so unfortunately you will have to, um, to live just with me but thank you for having me on this Marathon accessibility event.

It’s exciting. So, I can’t make this work, why, yeah.

Yeah, so just a little bit about me. I am the Chief Research and Innovation Officer at Funka, I’m also the representative of the UN initiative G3ICT and the International Association of Accessibility Professionals representative to the EU. I do a lot of strategic consulting these days, mainly for the European Union but also for, um, EU member states, national governments, when they are transposing the EU legislation into national law.

Um, I did, I’m leading the Radix subgroup, which is an expert group helping the commission and the member states implement the web accessibility directive, one of the recent legislations in this area, and I’m also one of the technical experts in the ETSI special task force 536. And we have been responsible for updating and harmonizing the EN standard, which is the minimum requirements of the legislation in, in Europe.

Um, just a couple of words on the, on the company I work for – Funka – we are specialists in accessibility, uh, but we were founded by the disability movement in Sweden and we, um, we started as an NGO, really, but we converted into a private company in the year 2000. We have our headquarters in Stockholm, and offices also in Oslo, Madrid, and Brussels.

We do consulting and development on accessibility and usability, we also do a lot of research and innovation, and that’s nowadays the department that I lead, and so we do national projects and also European and global projects in research and innovation. And we also cover policy and different kinds of studies and investigations, and we are of course also engaged in standardization, which is extremely important if you want to do accessibility. We’re also one of the proud founders of the International Association of Accessibility Professionals.

So, in Europe, the web accessibility directive is really turning everything upside down here.

We have been living in a world where we have more recommendations on accessibility than actual regulations, and this is now changing. We started with a procurement directive a couple of years ago, now the web accessibility directive is covering public sector, and in 2022 we will also have the European Accessibility Act which will cover products and services in some of the private sectors.

We are really moving from recommendations to, to regulations. And it’s an interesting time to be alive, here, but we have a couple of problems when, when this is happening, of course, and, um, one of the biggest problems that we have, um, tried to solve or looked into, at least, is that really, really, accessibility is much more than a technical issue, of course, and around 50 percent of the accessibility problems or fails that we encounter when we do audits – they are created by web authors.

And this is not a surprise, because many web authors – and this is now in, in the public sector mainly – most of the web authors are not professional communication people, and the absolute vast majority of them are not accessibility experts. So, of course they do create content that is not perfectly accessible.

And, and this means that we now have approximately 7 million interfaces that are supposed to, to comply with the new regulations and with seven million interfaces we have, I don’t know I can’t even count the number of web authors that would need training to, to make this work. And of course that’s good for us, because we are specializing in this, and we’re happy to provide training, but it just won’t scale.

It’s not enough just to put out manuals and training and, and you know, trying to make all these web authors do the right thing, because, it, people change their job, they, um, they go on parental leave, or, or you know, we change people all the time. So we just need to do something else than only training, because that, that is not, will never be enough. And that’s when, when the idea came that – what if accessibility could be built into the websites from the start? What if the authoring tools could do something to help, here?

Um, maybe everything can’t be automated, but at least the authoring tools should have a possibility to give much better support to the web authors. That was sort of the idea behind this, this research project.

And we managed to lobby for this idea and make more people interested in it, and after a while the European commission came out with a, with a call for, um, for proposals on trying to see if authoring tools could actually, uh, be part of the solution here.

So, um, the European funding for research, and very common way of doing this, is that you first, you apply for a pilot project, where you sort of prove that your idea or thesis is valid, and that there is something here to, to keep um, looking into, and then, if you’re successful in your pilot, then you can also have the possibility to make what is called a preparatory action. Which is where you actually perform the real research or, or whatever initiative is needed to be done. So it’s a two-step process, often.

And, um, we did get the opportunity to, to perform a pilot project a couple of years ago, where we looked at the 30 most used tools in the public sector bodies in, in the EU. So we, we crawled the internet for public sector bodies and, and found the 30 most used tools, to make sure that the efforts were made on the most frequently used ones. And we did some piloting and testing, so trying to see what happens if you, if you’re not an expert in accessibility, and you’re procuring this tool, and maybe your, your supplier is also not an expert in accessibility, because that is the normal case – that is the majority of these situations.

What happens then? If you just acquire an authoring tool and a website for somebody, from somebody that is not an expert, how accessible is it then? It’s like, I know this is really theoretic, because it’s not really possible, but the idea is if you just sort of push the button and up comes a website from your package then how accessible is it by default. What is built-in there. And the answer was: not much. But we did a lot of surveys and prototypes, we talked to, uh to suppliers and, and vendors and tool makers, and so on, try to figure out how much is really included in the, sort of, basic, the standard templates, and so on.

And we did a lot of analysis and also user testing, of course. User testing both with web authors, and also end users with, with disabilities, to, to make sure that the results were valid from both perspectives. And we did also work a lot with different stakeholders, and this was before Corona, so we, we could make, um, physical workshops in Brussels. We had specific, uh, workshops with web authors from the public sector, to see what are the user needs; how can the authoring tools help these web authors to, to do the right thing. It should be easy to, to do the right thing when you create content. And we also discussed with the end users with disabilities, of course, which are the most common problems or fails, or what is the biggest barrier to you.

And also looking into the, the back office, so to speak: if, if the web author has a disability, how accessible is the actual interface that you meet as, as author? So the, the back end, so to speak, the input part of the, of the authoring tools. And then we also discussed with the tool makers and other vendors, suppliers, and also standardization bodies. So we had a series of workshops where we, where we really collected user needs and experiences from these different stakeholder groups.

And the result from this pilot was that, well, there is not much accessibility by default out there in the most used tools, unfortunately, but there’s big demand. And also we thought that there would be, um, well, a good potential to do something. If something is not perfect, then you have the, the potential to, to make it better. So, the result of the pilot was really a matrix of, of the most used tools. And also we made three sets of guidelines: for web authors, or people who are procuring or looking for a tool, and also for end users with disabilities, and for the industry. And really this was then the starting point for further research.

And the results of this, um – not everything is, is public, but the public parts of this project’s results – can be found on our website. It’s www.funka, which is f-u-n-k-a.com slash we4authors. So “we” and the number 4, and “authors.”

And then, of course, the next phase started and we were happy enough to, to get also the second phase funded by the commission. And we are now, uh, a third into, um, the preparatory action, which is called “we4authors cluster,” and here we, we work with a cluster of tool makers, because we think we need now to dig deep into the, the real code and the real, um, potential of this. So, based on the user needs that we found in the pilot, we are going to create new features that can be adapted into whatever tool you use.

We are going to do extensive user testing to make sure that the features that we create are really good – they’re user friendly, they are solving the right problems, and that the web authors like them. Because otherwise they won’t be a success. And we are then going to try to implement these features in different kinds of tools, so that we make sure that we can reach as big part of the market as possible. And, and we are then, when the project is finished, by July next year, we are going to share code, if there, if we are resulting in code, which I hope, or, if we can’t produce the code, then at least we will share the documentation, and prototypes, videos, visits, or whatever it can be from the user testing, to show as inspiration for others.

Because maybe everything can’t just be a snippet of code, and then you can just implement it quickly into whatever tool, maybe life is not that easy, always, so we will also describe the feature, and describe how it works, and what people liked with it. And so that then, the rest of the market can, hopefully, use those ideas and for inspiration, and also implement these ideas in their own tools.

And this is really an important part of of, of all European research, of course, that, that the result is going to be for the common good. So this is not something we, or the partners, will, will keep um, behind any closed doors – the idea is, is really to share with the community.

And the members of this cluster, right now, are Drupal – um, we have, um, Mike Gifford of OpenConcept who is representing Drupal, here. We have Plone, represented by Timo Stollenwerk from kitconcept, in Germany. We have SiteVision, a Swedish, um, license-based authoring tool. We have Joomla – um, Brian Teeman is the representative for Joomla, and the foundation of Joomla in Germany is also a part of the project. And we have Umbraco, represented by Sigma in the UK. So, these are the cluster members who are representing different kinds of, of authoring tools. And we are, of course, also working with other tool providers, but we are also closely related to the International Association of Accessibility Professionals, so that we can reach out to both vendors, and also, um, customers using these tools, and the end user organizations.

So, with IAAP we, we have a really broad outreach, apart from, from the partners’ own networks, so to speak, and we also work closely with ERRIN, which is the European Regions Research and Innovation Network. So, with this network we can also make sure that we reach a lot of the public sector bodies’ web authors in, in Europe for user testing, and, and also making sure that the user requirements are, are met in a really, uh, relevant way for them.

After the discussions with, with the stakeholders and the different user groups – and specifically, of course, the web authors and their needs – and also the ideas and experiences from our cluster members, to make sure that whatever we do is, is sort of relevant, and possible to use also for the tool makers, we have, we set up a set of criteria for, for the features: what kind of, um, what kind of problems we wanted to, to solve, how frequently this, um, problem was occurring, and so on.

And then, based on these criteria, we selected 10 different features that we wanted to try with users. So now we are, specifically, in the position in the, in the project where we have already selected 10 features, and we are now going to user test them, starting in October. Um, so we are, right now, finishing or refining the prototypes that we are then going to, to test in an iterative process, of course, inviting users, web authors, and others interested to test the prototypes and come back to us with feedback, so that we can refine them, and make them better and better. So, we have until Christmas to do this, really, but we’re starting here in October. So the first of the features that we have, um, selected to try, and make sure that we can support users, web authors with, is really how to provide alt text – alternative text – on, on images.

It’s a very basic thing, but we have seen that this is solved in different ways in different authoring tools, and sometimes it’s quite hard to do in a good way, and sometimes it’s, well, you have the possibility to add the alt text, but you don’t get any support on how to write it, or when to write it or when not, and so on. So the whole idea here is to provide more support. If something can be, maybe, mandatory, we will try to make it mandatory. Alt texts can’t really be mandatory, because not all images need them, and so on, but we can also prompt for the web author to do something, so it’s really visible, it’s really, um, you’re sort of pushed to do the right thing. You shouldn’t be, um, you shouldn’t have to look for the place where to write the alt text. It should be very obvious and sort of “in the flow” of your authoring process.
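The kind of prompt Susanna describes might look something like the following in an editor’s image dialog. This is a hypothetical sketch, not any specific tool’s interface: the point is that the author is explicitly asked either to write alt text or to mark the image as decorative, instead of being allowed to skip the field silently.

```html
<!-- Hypothetical image-insert dialog: the author must make a choice. -->
<fieldset>
  <legend>Describe this image</legend>

  <label for="alt-text">Alternative text</label>
  <input type="text" id="alt-text" name="alt-text"
         aria-describedby="alt-help" />
  <p id="alt-help">Describe what the image shows in its context on this page.</p>

  <label>
    <input type="checkbox" name="decorative" />
    This image is decorative (publish with an empty alt attribute)
  </label>
</fieldset>
```

Making the decision explicit is what puts the alt text “in the flow” of authoring, rather than hidden in an advanced settings panel.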

And then the, we can also provide, maybe, support in different, at different levels, or information. So we have all this sort of the levels of, of automation we can do – we can either build it in, so that it’s really, by default, making it accessible; or we can prompt it, or we can make information, or, or support in in other ways. So we will try different ways of doing this, and see what the web authors find most useful, and then present that to, to the market, when we have a good decision there.

So alt text is the first one, and then we are also prototyping now the possibility to change language, um, when you have multiple languages on a, on a website or a page, which is very common in, in the European context. We often have websites with, with more than one language. Also, we’re looking at documentation of the accessibility features, because our research has shown that many web authors don’t really know what their authoring tool can do. So how can we make the documentation better, and also, how can we make it – or we want to try if it’s better to have the documentation, the information, that support in a wizard, sort of “in context,” so where you are actually performing, doing something, that is when, where you can find the information.

Or, if that may be disturbing for you, because the user experience will then, maybe, be too crowded, or too overwhelming with all this information. So, maybe it’s better to have it sort of on the side, where you need to go look for it. And we have, we have seen different situations, different contexts, and also different web author groups who prefer one or the other, so this will be really interesting to do A/B testing on. Um, of course, some of the really big troublemakers in authoring are tables and forms, so we are going to test supportive table creators and also form editors in different ways, that can be, um, a combination of making it accessible by default, and supporting you when you do more complex tables and forms. Of course, everything can’t be automated, or at least not now – um, maybe in the future – but we want to, to make sure that the author has better support, uh, in doing the right thing.
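One concrete thing a supportive table creator can do is emit real header cells with `scope` attributes by default, which is what makes a data table navigable in a screen reader. A minimal example of the kind of markup such a feature could generate (the table content is invented for illustration):

```html
<!-- An accessible data table: header cells are real <th> elements
     with scope, and the caption names the table for screen readers. -->
<table>
  <caption>Opening hours</caption>
  <thead>
    <tr>
      <th scope="col">Day</th>
      <th scope="col">Hours</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Monday</th>
      <td>9:00–17:00</td>
    </tr>
    <tr>
      <th scope="row">Tuesday</th>
      <td>9:00–17:00</td>
    </tr>
  </tbody>
</table>
```

If the tool produces this structure automatically, the author never has to know what `scope` means; that is the “accessible by default” part, with extra support reserved for more complex tables.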

When it comes to video, we are testing supporting the editor on how to make sure that the video is compliant with the regulations. And then we have four features that are sort of connected to each other, and they all have to do with testing. So we want to try out how to best test accessibility while, or when, you are editing or publishing something. So we are trying to have testing procedures built into the editor, so while you’re actually creating the content. And then another specific sort of easy win is if, if we support web authors in, in providing accessible documents. Especially, specifically, PDF documents – which is, all over the world, I think we have unaccessible or inaccessible, um, documents, and that is a big issue that many web authors tell us they would like to have support in. So, some kind of testing before you, or while you’re, uploading your documents, so that you know if the document is okay or not. And then we want to test the full page.

So when you have um, provided all the images, and the objects, and the content that you are going to to publish,
then during this content creative, creation phase, you could also test all the different parts and objects that you are, um, going to, to publish. So before publishing just check if it’s accessible or not, and then, of course, helping you to remediate this will be an important part.

And maybe not specifically for the web author, but for the website owner, or the person who is responsible for the whole, um, the accessibility of the whole website – then of course there could also be a built-in testing tool for the whole website. And please note that we are not building a new, eh, automatic tool for accessibility testing. There are many out there and some of them are really good. So we are not going to test if they are good or bad, it’s just how can they be combined, or included, or implemented, in the authoring tool or in the content creation phase, so that it supports the, the web authors in a better way.

Because what we see is that many of the testing tools are quite technical. And they may be very helpful for developers, but the web authors are sometimes not very technical, so it can’t be too, um, sort of, success-criteria related and really technical in the way it’s presented. It needs to, to support you in, in another way. And we think combining it into the content creation phase could be really helpful for at least some of the web authors. So these are the 10 features that we are now prototyping, and going to test, and these are going to be tested like generic tests, so we have made them anonymous, so to speak – we are trying to just test the feature itself, and the prototypes are not supposed to look like any specific tool. It shouldn’t be important which tool you’re used to using, but you should be, sort of, recognizing the, the content creation, how it looks when you, when you edit content.

That is the, the idea. So, we’re doing, um, this testing on a prototype level, and then when we, when we have learned more or less what the web authors need or or want, then we’re also going to do test implementations in the cluster members. So all the authoring tools that are working with us in this project also have the possibility to, to test the, these features in their own, um, in the technical environments, so to speak. Of course it takes time to implement these things in authoring tools and, and maybe we won’t be finishing this before the project is finished, but at least we want to make sure that we have done some test implementations, so that we can also say that we know that these, these features work in at least some of the technical environments that we have tested, because that makes it more probable that they will also be possible to use for other tool makers.

So the test implementation in the specific tools is also an important part. But what we are trying, doing right now is first of all the generic testing. And because of the pandemic we will do, we will do some face-to-face, um, physical testing at our offices, but most of the testing, so, the quantitative part of it will be online. So it’s actually open for anyone that would like to, to test these features and we are happy to welcome anyone. And we’ll come back to that. So, um, will this solve all the accessibility problems of the world? No, it won’t. But we believe that built-in accessibility by default in authoring tools could be an important part of the solution. If we succeed in this. Um, because, I mean, this is kind of obvious, but if you, if the authoring tool is supporting you in your content creation phase, then you can avoid unnecessary mistakes.

And this is really the primary goal here. Um, accessibility by default can make it. We can, we can start with the hygiene factors – that the authoring tools shouldn’t, sort of, create accessibility problems – but it should also support the author in not making those mistakes. Another important part here is really to help the non-experts to get it right from the start. Because, as I mentioned before, most of the web authors in the public sector are, are not specializing in communication, and definitely not specializing in accessibility.

So we also hope that this can mean that the governments can create framework contracts – so, the centralized procurement that is very much in use in, in some of the European member states – which can mean that tool makers that do provide accessibility by default, using our features or, or other services, but, but in some way can prove that they, they do provide good support for web authors, they can then have a, um, an advantage in the competition, by being sort of checked on beforehand by, by the central procurement system.

So that is what we’re hoping. That a group of tools will be, sort of, the front runners here, and they will then have the possibility to easier at least sell their services to public sector.

And with that, experts in accessibility, and also experts in, in tool making, and the web authors themselves, can focus on more interesting things, or more complex issues, and sort of solving the rest of the problems that we can’t solve in this project. But that is the, that is the aim of these projects. So, I would like to invite you all to make contact with us, and if you would like to, or if you, one of your clients or customers, would like to try – if you have any web authors, if you know web authors that would like, are interested in trying out these features – then please do contact us.

There is room for everyone who wants to, to contribute to this, and of course the more test persons we have, from more parts of the world, and with different backgrounds and so on, the better it will be. So the results will really depend on, on the users who want to test. We already have, um, quite a lot of testers. I think, at least, over 100 organizations have, have said that they would like to, to join the testing phase, but we are always welcoming more. So please do contact us.

You can use my email, which is Susanna, s-u-s-a-double n-a, at funka.com. And we also have a project website. It’s not very active yet, because we don’t have many results yet to show, but we will be more active on this website soon, I hope. So you’re also welcome to follow the project on accessibilitycluster.com.

And with that, I’m open for questions.

Roberto: Well, thank you very much, Susanna. We don’t have a lot of questions; we have one main question at this moment, which is the next one: you mentioned you will develop new features. Do you have examples of what kind of features are good candidates?

Susanna: Well, before we have tested them, we don’t know which ones are the best candidates. That really is the test – it’s the 10 features that I talked about and showed on the slide. These are the features that we are now testing, and if they all turn out not to work, or not to be of interest for web authors, then we will have to come up with new ones. Because we have promised the European Commission, who is funding this, that we are going to provide at least 10 features when the project is ready. But as we are just starting the testing phase, I can’t tell you which one is the best candidate right now. We are testing all 10 features, and then hopefully many of them will be good enough to present to the world. And if not, we will have to make another round and create new ones.

Roberto: Cool, thank you very much. Well, at this moment we don’t have any other questions. Is there anything that you want to add in these couple of minutes, anything you didn’t cover in your presentation? You have, like, another five minutes if you have anything, or we can take a break.

Susanna: I hope that people will connect with us, and I’m happy to answer any question via email or Twitter. Just reach out to us and we will be happy to discuss this more, because I think there are a lot of interesting things here, and we are really eager to hear from WordPress developers, and designers, and users. I hope that you will all be interested in this project and reach out to us to learn more.

Roberto: Awesome. Um, well, we can give it a couple of minutes and see if there is another question on YouTube.

Okay, we have a question: do the testers need to be multilingual?

Susanna: Good question, thank you. No. The tests will be performed in English, so you would need to understand at least the instructions in English, because the prototypes are in English. But otherwise we don’t have any requirements on the testers.

Roberto: Thank you, Susanna.

If anyone has another question, ask it right here.

Susanna: I think it’s in the middle of the night in many countries.

Roberto: Yes. Actually, for me it’s past midnight, but what’s the time in your country?

Susanna: It’s now 8:30 in the morning, but it’s Saturday, so I don’t think many people are up anywhere, exactly.

Roberto: Yeah, most people in Europe, yeah…

Susanna: Still asleep, I think, yeah.

Roberto: Just waiting for this Saturday. But again, thank you very much for sharing with us and taking your time. I think we’re gonna finish this one.

I think Lnn is asking a question: how much time does the testing take?

Susanna: That’s another good question. Well, you can spend as much time as you like, of course, but you can test just one feature, and that will take from five to, I guess, mostly 10 minutes. We, of course, hope that you test all the features, and you can also come back, because we have this iterative process, so then you can test again and see the refinements. But you can just spend five or ten minutes and do one test, and we will be happy, and if you are really interested and keen then you can come back and test all 10 features. That will then take maybe an hour and a half or so, and you can also come back and do it more times, but there’s no specific requirement on time needed. So you can just do a short one and see if it’s interesting, and then, hopefully, you will stay on and do more tests.

Roberto: And then we have another question, from Ahmed: if you had to name one thing as your biggest achievement in accessibility, what would it be?

Susanna: My biggest achievement? Oh. Well, personally, I think that we are hiring people with disabilities in our company. I think that is the biggest achievement for me personally. I believe the key to inclusion is to have a workplace, and really that we have been able to not only hire people ourselves, but also help other young people with disabilities to get into the workforce. That is what I’m most proud of in my professional life. That is not really accessibility as such, but I think that is my sort of spontaneous answer to that question.

Roberto: Awesome, thank you very much. I think Lnn also asks, about the previous question, whether the time that you described also includes the training?

Susanna: Well, there’s no training, really, here. I mean, the testing is just really simple prototypes, so you are doing the same thing as you normally do when you’re creating content. You are writing a text, you are adding a link, you’re adding an image, and then you are supposed to add the alt text, and so on. So you are performing this sort of user scenario from the web authoring perspective. And then there’s a really short instruction on what we want you to do to perform that test. So I wouldn’t say it’s training, really. It’s a short instruction, then you perform the test, and that is screen recorded, so we can see what you do. But we also always ask if you have any comments or suggestions or anything. We’re really happy if you want to share your thoughts with us, so you also have the possibility to answer some questions afterwards, if you would like to. But it wouldn’t take more than 15 minutes just to do one of the items.

Roberto: Okay, awesome. I think that’s all the questions that we have, so we’re gonna finish the presentation right here. Thank you very much for your time. And for the people who are watching this presentation: don’t forget to attend our next talk.

It will be on how to use ARIA in forms, at 7:00 AM UTC, presented by Rian Rietveld – sorry, my accent. Thank you very much, and we’ll see you later. Bye!

Susanna: Thank you.

The Case For Captions

Abha Thakor: The Case for Captions

Captioning online video content should be an integral and indispensable part of the making process.

Abha Thakor explores the reasons for this, in a talk which covers the importance of inclusion, accessibility and a wider business imperative. She looks at the availability of today’s technology, and how it has moved from the clunky subtitling of the past to the smart use of closed captioning. She also considers the impact of AI on the process.

Participants will be given details of how to get involved in the WordPress community’s work on subtitling. You will leave this talk with the ability to improve your global messaging reach through easy to understand steps.

View Abha’s Presentation

Transcript

Abha Thakor: Well, thank you for joining me. I’m gonna see if I can minimize that window, and hopefully that won’t disturb anybody who’s trying to read the text as well. So, it’s lovely to be here for WordPress Accessibility Day. We’ve got two cameras running, and we have a number of people in our team and in my work organization who are going to be lip reading today. They’re also doing a live translation, because it’s translation week for WordPress as well, so you’ll have to be very understanding that I will be looking at two cameras at one point, just so that we can help those people who were struggling a little bit yesterday when they were trying to follow in the practice, okay.

So I’m Abha Thakor. I’m from NonStopNews and NonStopBusinessSupport, and I’m here to show you why you should love captions and how you could become a subtitling champion. We’ve prepared a link for those who, as I’ve said, are lip reading, and I’ll try not to move too far away from the text and the jargon that they’ve already input into the AI system, so that they can have an easier time of translating us today.

So, we’re going to look at “to caption or not to caption”, but in my view, actually, there is really no question. We need to be doing this. I’m not going to be focusing on the how, but more on the why. So why are we talking about subtitling today? Well, simply: you do not have a finished video product unless you have subtitles. And hopefully, through this presentation, you’ll see why that is so important.

Streaming video has increasingly accounted for a large proportion of all internet usage.

The most recent survey by Statista – a company which researches key aspects of business data worldwide – found that video represented up to 95% of all internet usage in some of the countries they sampled. Research since continues to show that in most countries the trend for online streaming of video has had a big growth trajectory. During the pandemic, not surprisingly, it has soared.

The effects of the COVID situation have resulted in a further explosion of demand for online streaming access from businesses, social organizations, special interest groups, and private individuals across the world. So online video consumption really cannot be ignored by businesses or other organizations. It needs to be part of your marketing strategy, your engagement, your knowledge management, and ultimately your sales.

In an increasingly crowded marketplace, and with more sophisticated production software now available, it makes no sense to omit that final part of what should be in all of our checklists, and I hope after today you will have this in your checklist: subtitling and captions.

First, let’s have a look at what these things really are. That is, of course, apart from being life-changing for some of our colleagues, vital learning tools, and increasingly essential for search engines.

Basically, if you don’t have that subtitle this is what is going to happen:

People won’t be able to access it in a variety of mediums, and search engines certainly can’t. We’ll be talking more about that through this presentation. Now, we often come across the two terms, subtitles and captions, as if they mean the same thing. But there are actually three terms that are widely used in media production.

Subtitles, open captions, and closed captions as you can see on the screen. That’s what the technical definition is. They do not actually mean the same thing. At the moment there are three main types.

We’ll come to images and the new type of captioning that is coming in later on.

Fundamentally, they all mean a visual on-screen text representation of the words heard in an audio track, a video, or your broadcast. You’ll often hear these terms used interchangeably, but you do need to know which of the terms you actually need for your product and your organization, depending on the environment, the ecosystem, and the regulatory framework that you’re operating in.

So captions come in two varieties – open or closed. Both go further than simple subtitling.

They not only represent the spoken words, but they include text descriptions of all non-verbal sounds which may form part of that presentation. So captions amplify the dialogue, the narrative, with descriptions of sounds that would be included through the subtitling of that spoken language.

So, for example: woman coughs loudly; sound of breaking glass; dog barks.

(I always wait for my dog to actually see that as a command.) Doorbell rings; music plays. This is to try and get as close as possible to the atmosphere and feel of the audio track. It’s much more interesting for the person who can’t access the words or the sound to have a better idea of what is being represented.
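As an illustrative aside (not part of the talk itself): cues like these, including the non-verbal sound descriptions, are typically stored in a caption file such as WebVTT, the format most web video players read. A minimal sketch, with made-up timings and text:

```python
# Illustrative sketch: writing caption cues, including non-verbal
# sounds, as a WebVTT document (the caption format used by HTML5
# video). The timings and cue text here are invented examples.

cues = [
    ("00:00:01.000", "00:00:03.500", "Welcome to the talk."),
    ("00:00:03.500", "00:00:05.000", "[dog barks]"),
    ("00:00:05.000", "00:00:07.250", "[doorbell rings]"),
]

def to_webvtt(cues):
    """Render (start, end, text) cues as a WebVTT document."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{start} --> {end}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

print(to_webvtt(cues))
```

The point is only that non-verbal sounds are ordinary cue text, conventionally set off in brackets, so adding them costs nothing extra in the file format.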

Providing captions can have a greater impact on production time and costs than subtitling, but both continue to become easier to create and synchronize through the development of enabling software and artificial intelligence. As a firm, we do a lot of work in this area, and it’s so wonderful to see it becoming more accessible. I’ve mentioned two forms of captions.

Open captions are captions embedded in the video at production stage, and are on screen permanently. The viewer cannot choose whether they appear or not.

We’ve – I’ve kept this slide up, because when I’ve done this presentation before, people have said they felt it was easier for them to understand the difference when we have this up. So, closed captions are interactive, in that they can be turned on or off at the choice of the viewer. A quick toggle of the remote control and, hey presto – captions on, captions off. A bit like the mute button for audio volume.

Captions can be a legal requirement in some jurisdictions for videos intended to be broadcast in public, and they’re enforced through the various countries’ legal systems. As ever, I would urge you to look at the regulations in your own jurisdiction – you may find these in a variety of places, from broadcasting legislation, to equality and anti-discrimination regulations, to communications and media laws – but also at the ethos and values of your own organization.

The penalties for infringement can be high, and it’s worth including the captions and subtitles in your initial brief, in your scoping. In some locations the requirements apply to both public bodies and organizations with registered charitable status, so do your homework. Check what it is that you need.

So, apart from legal compliance where it applies, why should we provide subtitles and captions? Well, subtitling is a way to make that video material more accessible. It actually makes it possible for someone with a hearing difficulty to access the message behind the video at all.

But there are a lot of myths about subtitling and why you shouldn’t do it, so hopefully today we’ll debunk some of those myths. Hearing disabilities can be caused by a large variety of reasons, from disabilities from birth, trauma, injury related, and from actually just getting older, as well as health related issues. They may be temporary, they may be permanent.

We have a number of colleagues in my work and in the WordPress community who have some kind of hearing impairment. In our work in both of these areas we’ve been running awareness surveys around what people think subtitles are for. Our panel found that there were myths about subtitling that continue to exist, and continue to be shared.

People think that it’s outdated. They think that when someone appears on a video, words pop up automatically underneath as soon as someone clicks a button. They think that subtitling is there for people who do not have a computer with sound – you would be surprised how often that comes up as a reason!

People also think that accessibility does not relate to video. And the one that we increasingly hear – it’s for old people, or for people who really aren’t our customers, so therefore we don’t need to worry about it.

The American National Association of the Deaf estimate – well, I’m waiting for my slides to catch up on your screens – so, I’m hoping that they have.

Um, so hopefully you’ve got a screen that shows a blue screen and orange, but if not, I’ll talk over it. The American National Association of the Deaf estimates that five to six percent of the global population has some form of hearing impairment. That, at the time of their investigation, translated to around 350 million people who, assuming they have access to technology in order to watch streamed video, would be able to benefit from subtitling or captioning.

Even allowing for a proportion of those not having the means to access the internet, there is still a massive potential audience to cater for. On the 1st of March this year (2020), the World Health Organization estimated that 466 million people have a disabling hearing loss. It predicted the figure rising to 900 million by 2050, unless concerted action to address some of the preventable causes was taken.

Hopefully you’ve now got the, that figure on screen. Just, let’s just think about that figure. That’s a predicted 900 million people by 2050 who could miss out on your messages, online promotions, and offers, and that’s just from people who can’t access your message because they have a hearing issue. On top of that, you have all those people who can’t access the message for a variety of other reasons.

Let’s have a look at some of those reasons.

These are just some of the ways that subtitling or captions can be used and are being used.

Subtitles facilitate the translation of the original language medium into other languages. Today, I’ve provided subtitles which are being translated into a number of languages as part of WordPress translation celebrations this week. To get your talks and videos used around the world, make sure you add subtitles, as it makes it so much easier, and in some cases actually makes it possible for others to translate the text into their own languages. This might be that they translate in their heads as they watch along, or that they use apps when they’re reviewing what you have told them.

For those of us whose first language is English, it can be hard to comprehend how much this translating function of subtitles matters in everyday life. Subtitling television programs, films, and online video entertainment is much more common in countries where English is not the main language.

The need for these language tools to be part of business plans and marketing campaigns is frankly much better understood in many of those countries. We also need to remember: we live in a multicultural and multilingual world. Your customers and stakeholders in your target locations are likely to come from different language backgrounds and have varying levels of fluency. Trends show that the world’s increasing appetite for online learning is matched by its demand for being able to access videos while on the move.

Train and bus travel, with improvements in wi-fi, have seen commuter journeys turn into mini classrooms for one, with noise-cancelling headsets creating the complete study experience.

In our current times, audiences often watch, and want to watch, video content in busy households, particularly during COVID, or in bed, and don’t wish to disturb other people. Much of this learning and assimilation is only maximized through having subtitles. They also cut through the noise that surrounds us on public transport, in shared offices, or wherever we choose to access the information.

There may be times when it’s helpful to watch a video individually or as a group without the sound on in places where background noise may make it impossible to have sound at audible levels. Please think about those occasions where you have found that, and how much more useful you found it, if there had been subtitles.

Many people do not have the sound on, on their mobile devices, while traveling or in offices. Increasingly, content is being accessed and videos are being played on those mobile devices.

A colleague of mine in the subtitling campaign, Siobhan Cunningham from the Yoast Academy’s e-learning team, always explains: it is about the word.

If you see someone talk you might take in some of what they’re saying, but by having subtitles and being able to follow the words that makes it easier for you to follow what they’re saying, and for your brain to prepa… and – and you’ll have to bear with me, I need to just check. The lip reading people just asked me to pause for a moment.

Okay. And it makes it easier for your brain to process it, and remember it, and better understand what you are learning.

She said that if she sees words that tell her about what the person is talking about, she’s more likely to listen, more likely to stay, more likely to finish the video. We all know – we’ve all seen the statistics of how long people stay on video. People’s attention spans can be extended if we help them through subtitles. If we’re not in quiet environments but need to access learning material online, it may be the only time of the day that we can spend on our own development, or on keeping up with training that is now essential for our everyday work.

All those really great talks from WordCamps that you would like people all over the world to access? Subtitle them. Someone might then be able to access them at a time that suits them – an environment that they are currently in.

From an education and training perspective, provision of visual representations of technical terms, jargon, and acronyms allow for the reinforcement of learning.

The three key learning styles that I’m sure you’ve heard of elsewhere are visual, auditory, and kinetic. And you should hopefully see these now on screen.

That basically means that people learn visually through seeing, auditory through hearing, and kinetic through doing. All of these styles of learning are better supported when the visual picture and the audio commentary is reinforced with a visual underscoring of text.

Even without the express intention of educating, provision of subtitles can even subliminally help speakers of other languages acquire or appreciate subtleties of the original language. Usage, slang, the idiomatic phrases we all use, spelling and much more.

The automated transcription software service Sonix has found that 80 percent of all social media videos are viewed with the sound off – a statistic which might make you think that doing that defeats the object of the medium. But be that as it may, it also poses an opportunity to reach that switched-off audience more effectively by using subtitles. Being able to read information, to absorb the content, helps us to understand, respond, explore, and apply that knowledge.

Subtitles really do reinforce learning. I’ve worked in learning and CPD for 20 years or more now, and every time we have subtitles, I know the learners are learning more, they’re accessing more, they’re retaining more, and they’re able to apply it.

Subtitling can now be achieved increasingly quickly and economically through newer technology and online tools.

Please try and keep that in your minds and think about how you would want to access content.

People often say “a picture is worth a thousand words.” Well, if you can’t hear the words of a talking picture, and have no clues about what is being said, how is that person going to take in your messages? How are they going to learn?

Do you really want them to just be watching paint dry?

And for the people in the audience who are going to tell me that watching paint dry is interesting… then we’ll have that discussion after the talk.

So… search. Search loves subtitles, and a lot of people think, well, why would it? Because search is an index. Googlebot and other spiders which are crawling around – (I always have to shiver when I think about these spiders) – are incessantly indexing web pages.

They can’t watch those moving images and they can’t listen to the audio – not yet, not really – but they can home in on captions. Videos with closed captions rank higher in Google search than those without, according to a study carried out by US broadcaster American Digital Networks.

In addition, some advanced searches allow for more specific searching of closed captions. It goes without saying that additional traffic directed to your site by a high search ranking, will lead to an increase in traffic to other landing pages or content you want to bring to the party. The benefits for engagement and SEO combined make a powerful argument for ensuring your online videos are posted with accurate, legible, closed captions.
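As a hedged sketch of why this helps search: the text inside a caption file is plain text that can be extracted and published alongside the video as an indexable transcript. This assumes a simple WebVTT file with no cue identifiers, settings, or styling blocks:

```python
# Illustrative sketch: extracting plain transcript text from a
# simple WebVTT caption file so it can be published on the page,
# where crawlers can index it. Assumes uncomplicated cues (no
# identifiers, cue settings, NOTE blocks, or styling).

def vtt_to_transcript(vtt: str) -> str:
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        # Skip the header, blank lines, and timing lines.
        if not line or line == "WEBVTT" or "-->" in line:
            continue
        lines.append(line)
    return " ".join(lines)

sample = """WEBVTT

00:00:01.000 --> 00:00:03.500
Captions help search engines

00:00:03.500 --> 00:00:05.000
index your video content.
"""

print(vtt_to_transcript(sample))
# Prints: Captions help search engines index your video content.
```

A real pipeline would handle the full WebVTT grammar, but the principle stands: the caption track is the machine-readable version of your video.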

Legible is something I’m gonna stop on for a moment. Please, if you’re not sure about how to do legible, there have been other talks during this accessibility session which explain how to do that: how many words to have on screen, the font sizes.

Please, if you’re going to create captions, don’t create them so small and all jumbled together that people can’t access them.

The Scottish audience development company Culture Republic runs workshops in captioning, and they assert that adding captions to YouTube videos has been shown to increase completed viewings by around seven percent. That also helps your search engine ratings, and it helps people find your content, stay on your content, and refer people. They also quote research which shows that 85 percent of Facebook views occur with muted audio.

Again, this backs up the Sonix statistic. I will just apologize: I’m running a temperature at the moment, so I’m having a little bit of difficulty trying to focus on the mouse on the screen. So apologies if the screens aren’t changing quickly enough.

Basically, captions serve as a reminder to the viewer that we are now at a segment of particular interest and to switch full attention back to it.

Reinforced messaging through the use of captions is so important. If the consumer is going to behave in a selective manner, we need to ensure he or she knows when to exercise that selectiveness.

Captioning allows that to happen better. We’ve talked about the bots, but just to reinforce it – search, search, search. They don’t understand your video without captions. If they find no value, they will move on.

I’m going to take you down to what’s happening now with artificial intelligence. As with all things tech related, times move on and processes are in a state of constant evolution. Captioning for the last 10 years or so has used differing technologies including automated speech recognition known as ASR. ASR is a predictive tool.

It generates word sequences from an audio signal. The result is not always a totally accurate rendition of what is being said. The nature of live streaming especially, where conversation is interactive, presents challenges to ASR. Not all speakers will be clear enough in their delivery. To ASR, background noises in a speaker’s environment can seem like verbal sounds to be represented in captions.

How well has it been taught to differentiate between sounds? If discussions get heated, people may speak over each other or cut another speaker off abruptly, again resulting in a muddled set of captions.

In fact you don’t even need that to happen – you can just have lots of different voices that are similar and ASR may not pick it up.

Strong regional or national accents, or varieties of inflection from different speakers can confuse ASR, and result in misrepresentation in captioning. I’m sure we’ve all sat through presentations, even TV programs, where the captioning has been so obviously incorrect it can almost be comical. Except for those who rely on it. For them it really is not a laughing matter.

You’ll all probably be aware of the variety of accuracies and inaccuracies delivered by today’s range of streaming platforms in terms of captioning. The consequences of misrepresentation via captioning are the same as the consequences of mistranslating between two languages.

To be effective, the ASR system, like Web Captioner and others, needs to have been prepped with a wide vocabulary and specialist words. It can also be taught to learn in multiple languages, and any specialist acronyms, jargon, tech terminology, and so on which it might encounter.

If it doesn’t recognize the content of the source signal, it cannot predict the patterns of words. You will sometimes see a “next best” word or phrase appear, which of course will often make a nonsense version of the original. So one of the most frequent questions I get is, “Oh, I’ve added AI to my streams, I’ve added an automatic captioner, that’s all I need to do.” No. You need to train it. You need to help it. If it helps, think of it as your pet dog. If it doesn’t know what the instruction is, it won’t know what to do. If it’s never heard the word “JavaScript” it won’t know what to say.

During the past year, the huge jump in demand for online conferencing streaming due to COVID restrictions has put a greater demand on existing automated speech recognition systems. AI engineers, we’ve all been working really as fast as we can to try and make things easier, to make things available. But a plea: help us do so. Help us test what is out there. Help us – tell us where it doesn’t work, because then we can change it.

The focus shouldn’t just be on speeding up the process, it needs to also be about improving accuracy.

Facebook Live has been using AI. It’s been doing away with one third of the components of a basic ASR system: basically, it’s been getting rid of the pronunciation lexicon, and it’s training the part which identifies the individual constituent sounds of a word to directly predict the characters of a word. A language model then determines the interrelationship of these words.

So the frequency of use, and the words which are commonly strung together, become what it thinks you’re going to be saying. The slimming down and improvement of accuracy in ASR was achieved using tools like PyTorch, an open source machine learning library derived from the Torch library. Please do have a look at it. There are lots and lots of ways you can help improve that.

Captions open up even more accessibility options.

We’ve already talked about the impact of captions for translation. Another development brought through artificial intelligence is the visual images being created based on the descriptive text contained in that caption. This is going to be our next generation of captioning tools. It’s in its very early stages.

We have AI which can produce captions from data contained in images, but only this week the Allen Institute for AI announced it had developed an AI which produces imagery created from those text captions. The results, for those who may not have seen them, so far look like something Picasso might have knocked up in a hurry, but the principle is that they generate an image. They allow somebody who may not be able to understand the full context of what is being said without an image added to those words.

Or they may not be able to process it because their neural processing is different. This is going to take what we can do with captions to a whole new realm. It’s going to make even more accessible information. But if you don’t have captions, if you haven’t introduced subtitling into your business plans already, when this technology develops you won’t be able to use it, because it relies on having those subtitles in the first place.

Eventually, the aim of the work that is happening on these developments is that the final image will be of photographic quality, with color, scale, and proportion all correctly represented. So when I’m talking about an apple and how that relates to a concept I’m describing, that would also appear. If I’m talking about a flowchart, eventually that would also appear. Again, it requires us to help teach it, but that is where the technology is going.

A simplified explanation of that process is that a machine learning algorithm has been trained to recognize conceptual connections between language and visual data. An illustration of this would be that, given the text “a clock tower in the middle of town,” the technology produces an image of a taller tower-like structure with a clock at the top, surrounded by less high buildings.

Suddenly, people are receiving the spoken word, the written word, and a visual representation.

Just think how powerful that combination is going to be.

We already have tools like Web Captioner. If you are doing an event and you don't have resources to purchase live captioning done by humans, which is a preferred option, please look at tools like Web Captioner. There are other ones on the market, too. They use Google APIs to create captions, but invest the time. Invest the time in actually training it.

We tested this in a number of WordCamps including WordCamp Dublin last year, and we asked all speakers to practice saying a couple of sentences from their talk into Web Captioner, picking the right option that successfully produced the text that they were saying.

We also asked them to tell us the words and the jargon that would not necessarily be known, or that might, with a strong accent, sound slightly different to what Web Captioner had learnt already. We inputted that text into Web Captioner, and it was wonderful to see that in most cases it improved the captions that we produced to a 98% level. It didn't get everything – it still had issues.

"Huddersfield" is always going to be heard wrong. It's just one of those words that an auto captioner just does not get right, but it's a huge step forward. Since WordCamp Dublin there have been many more developments and tests in tools like this. Web Captioner has, in the last few weeks, released a beta version which you can test and add to things like Zoom and other video conferencing tools.

We’re currently doing tests on this in the WordPress Marketing and WordPress TV teams as well as tests with other facilities like AWS, and how we can use auto-generated captions from, from those facilities to then correct and start subtitling more and more videos, and with your help making that possible.

OBS, which is open source software, also now has a beta version of captioning. So, if you are doing a talk and you're going through OBS for that talk, and connecting it to Zoom, StreamYard, whatever you're using, there is a beta version that you can now enable so that live auto captioning can appear. The more people use it, the more it will improve, but we need people to use it, test it, give feedback, and correct it. We need more accents, and we need people with more accessibility issues, to actually try the software. In my work, we have a number of people who have hearing difficulties who are now testing some of this software.

I focused this talk on the why part of captioning.

If you’ve been convinced – and I hope you have – that this is something you really should be doing, I suggest you look at the presentation slides from yesterday’s speaker, Meryl Evans, who’s given a handy list of tools which you can use.

People don’t like being in the dark. Help put the light on.

You can also do that, and make a difference by becoming a subtitling champion. It really is easy. I will guarantee you will learn a new skill, and we will help you learn that new skill.

You can also play a part in knowledge sharing, and the skills that you learn and the knowledge that you share go much wider than the WordPress environment, the open source environment. It can make a difference in your products, your company, your culture. It can make a difference in how we deal with information and people's access to it. Please invest in subtitling. Please don't put up half-thought-out solutions. Work out how it's going to work.

For a lot of the camps, today's event too, there is also another option where you can read subtitles that can be changed into different fonts. YouTube subtitling for a live feed is really difficult to read. It's very difficult to process if you have any kind of neuroprocessing issue. It can be very difficult if you just wear glasses.

Support WP diversity and tech accessibility. It will give you huge rewards, but it also means that the people who really need the technology don't have to be making that call themselves all the time. This should be something that we do from scratch, that we include in all our planning.

Finally, tell your speakers when you're planning a WordCamp, or any kind of online event, or even an in-person event when we're allowed to again. Help them understand the value of providing text even after the event so subtitles can be done quicker.

Help them even commit to subtitling something of their own. With the auto captioning tools that now exist it doesn't have to take a huge amount of time. Sometimes, where it takes time is because of unfamiliarity with where the speaker's going, what they're talking about, and the jargon.

Please come and join our campaign. You can make a difference for inclusion, diversity, and user experience. It will improve your organization’s messaging and reach. It will make it different. It may even help you one day.

Thank you. You can find me at @NonStopNewsUK on Twitter. I know I have had a lot of questions that have come in on Twitter as well already; I will make sure I answer those in the next few days. And I had a request from a couple of people who'd like to do an interactive session learning how to use Amara, and we have got those planned, too. So please do contact me on Twitter at @NonStopNewsUK, or on Slack at abhanonstopnewsuk, and we will connect you with those things.

Amanda: Okay, thank you so much Abha. That was wonderful. I'm so full of information. We really appreciate you sharing your expertise and breaking down the kinds of captions and subtitles and how essential they are to learning.

Um, there was a lot of conversation in the YouTube stream chat, too, and we do have some questions, with a little less than 10 minutes to get to them, so if you don't mind… The first question that I'm going to ask was, I think, mostly answered in the chat, but their comment was: "It's very interesting about non-English speaking countries having adopted captions at a greater rate. Which countries do you feel set the standard in terms of subtitles?"

Abha: The Spanish film industry has actually been really good at this, and there has been some work with the US as well, which has been very good in terms of how subtitles are presented. And I think that has been a model that lots of other smaller broadcasting operations have taken on. In terms of actual conferencing, where countries have a regulatory requirement to caption, it is worth having a look at the processes that they use, because they tend to be more tested, and they have tended to look at the various options and how they can do that. So, again, America has had a lot of regulation on that. Some of the big charities have also done some work on that, and I can share some links after this talk as well.

Amanda: Okay, perfect, thank you.

Um, the next question is: "SEO and YouTube are hidden gems for building our online footprint, as you mentioned. Are there rules for effective content for web pages, links, etc., for captioning, to help videos rank higher?"

Abha: There's a lot of work going on at the moment in terms of readability of captions, and there's some pioneering work in Switzerland about how algorithms can give a higher rating to captions that are readable. So, captions that come in huge blocks of text with no punctuation would have a lower reach. And there is some testing going on with a couple of the big providers for how that can be promoted more.

And you'll notice that some automatic transcription tools, particularly YouTube, miss out punctuation. So when people are then watching the video back, the bots are not understanding what's being said, because there's no punctuation to help them. If you put punctuation in, that does increase the searchability of that, and the ranking.

Amanda: Okay, great, that's helpful to know. Okay, we've got a little more than five minutes. Make sure, if you do have any other questions, to throw them in the chat. But the next question for you, Abha, is: "What is your opinion on delaying the audio and video signal at live events in order to give more time to live transcribers or AI tools to get the subtitles or captions correct, and deliver them synchronized?"

Abha: I think it’s a really big problem. I think, just looking at today’s, I know my screen over here, which has been live, has been quite behind what I’ve been doing.

A lot of the software like OBS has a real issue with this, because it creates a delay between the video and what you're actually saying, which causes a difficulty for anybody who's lip reading, because they're lip reading something that is then out of context with what's behind them, or what you're actually trying to emphasize, so they could see an emphasis on something that is actually no longer being talked about.

So I think that, as events move more online, we need to think about putting a delay in. But we also need to do some simpler things, like encouraging text to be available. The captioners will work a lot quicker if they know what jargon is going to be used.

So, at WordCamp London, our live captioners are very accurate. One of the reasons for that accuracy is because they have the jargon and the terminology, and quite often the talk wording from similar speakers, and that means that they don't have to build in that extra delay for everything to catch up again. It still doesn't avoid the internet delay that you will have with video, but there are easy fixes; most of the software, particularly the paid-for software, does actually allow you to compensate for that delay.

And I think there is an expectation now, particularly in the commercial world, and maybe it needs to come to open source too, that if you're putting on an event, you help train your speakers in how to do that. Because otherwise, regardless of the delay that you put in, a speaker's camera may still be adding several seconds of delay on top.

Amanda: Right. Okay, okay, thank you for that response.

Accessible Navigation from Scratch

Adam Berkowitz: Accessible Navigation from Scratch

No matter how users get to your sites, they deserve an inclusive, accessible experience. A main navigation component built with accessibility in mind goes a long way towards this goal. Fortunately, WordPress leaves the implementation of accessibility best practices up to theme and plugin developers. This means that by carefully thinking about how to build a menu from "the ground up", we can help all of our users use our sites better.

This presentation will demonstrate:

  • The basics of menu accessibility
  • How a custom WordPress Walker class can be used to create accessible markup
  • How to write flexible and extensible CSS to ensure that the menus are usable even without JavaScript
  • How to implement JavaScript that enhances the user’s experience

Watch Adam’s Presentation

Transcript

Mike: Hello and welcome back to the WP Accessibility Day, one o’clock a.m. UTC session. If you have any questions, our chat moderator Kayla will be answering them.

So please put in the questions that you have for our speaker. Please remember that we are a welcoming community and there is a code of conduct in force during this entire event, and you can find slides, Twitter links, and all the information on our website at wpaccessibilityday.org.

Our speaker right now is Adam. Adam is a Web Developer at the University of Connecticut, or UConn, in the Office of University Communications. He specializes in WordPress application development and web accessibility. He's also been on the Accessibility and Diversity Committees at UConn.

These include UConn's Innovation and Communication Technology Task Force and the University Communications Diversity, Equity and Inclusion Task Force. When he's not working he enjoys reading, martial arts and spending time with his family. So it's my pleasure to welcome Adam and his talk, Accessible Navigation from Scratch.

Adam: Hi everybody, let me see if I can get my screen shared, make sure this works correctly... and I think that should do it. Okay, excellent. So thank you very much for that terrific introduction, and thank you all for being here with me.

I hope that you've enjoyed everything so far and learned a lot from the other presenters today. So the web team in our office focuses on top-level and strategic marketing sites, such as uconn.edu, which is our main university website, our regional campus websites, and our hospital network. And as you can imagine, these get a lot of traffic and we need to make sure that they are as accessible and welcoming as possible to everyone who visits.

A few years ago, the World Accessibility Day New England Conference was held at the university. Pretty much everyone from our office's web team attended. As the conference went on, it became very obvious, very quickly, how much work we needed to do. One of the things we looked at on our sites was the main navigation components. As a state institution, we had some pretty specific requirements.

For instance, I think that at the time we were still supporting Internet Explorer 10 and we also wanted to make sure that for instance, people could use the navigation elements without JavaScript and we wanted to make sure that we were in compliance with all the relevant state and federal laws.

And we also wanted to do the right thing and have our websites be as good as they could be. So one of the things after that conference was, I wanted to improve these navigational elements which leads us here today. Now I’d like to point out that this project and presentation are based on the majority of the needs for sites currently in use at the University of Connecticut.

So even if you choose not to use the specific approach I’ll show you, the basic ideas can be adapted to a wide range of sites. Now unless you’re building a site which is only one page, people are going to have to find their way around and not only that, but as a web developer I know that if I put something or really anything interactive on a site, people are going to try and use it.

So that means interactive elements may get used in ways that are unexpected or unfamiliar to me. As developers, just like we validate form inputs, because we can't anticipate every single thing someone might enter, we need to consider how people will use our sites besides with a mouse and screen. In terms of accessibility, this concept applies to visitors with permanent disabilities first and foremost.

At the same time, some visitors may have a temporary or contextual accessibility need with respect to a site. Their mouse might break, they might be working on an airplane going through turbulence, their dominant hand might be broken or maybe they visited an eye doctor and got their pupils dilated, we just can’t know.

We can’t know the situation for every single person who visits us, but we can try to design, build and prepare sites in the most equitable ways we can. So how can we make sure this happens? Well we try to keep our sites POUR.

You may have heard or seen this acronym POUR in the past, but what does it mean in the context of a navigation menu? In my view, Perceivable means if a visitor can physically see the menu, it must be visible or be able to be made visible.

If that visitor can’t physically see the menu, a screen reader should detect and announce items. For the menu to be Operable, someone should be able to interact with it, with the mouse and or the keyboard. If someone uses a mouse and there are sub-menus, they need to be resilient. That is, the sub-menus can’t disappear immediately.

Further, all the items in a sub-menu need to be reachable with a mouse or a keyboard. For the menu to be Understandable, it should behave in a predictable way. Therefore, it needs to maintain visual and auditory consistency. For instance, if an icon toggles between two states, such as open and closed, it should do that predictably.

Menu items should announce themselves in a predictable way for people who use screen readers. Finally, for the menu to be Robust, in our case, we wanted to ensure that it would work at least to some degree without JavaScript, and this meant keeping the style and user experience approximately the same as if JavaScript were available. In order to accomplish these goals, we identified three cases for menu items that are typical for the types of site we build.

Overall, I think these cases are pretty common. First a link by itself which we can easily rely on WordPress to handle. After that we identified two types of sub-menus.

Those that have a link to a top-level page with sub-pages beneath, and menu items that need to solely act as a toggle to a series of sub-items. These are fairly typical types of elements in a nav menu.

Either you want to go somewhere by following a link, or you want to reveal more choices through some kind of toggling action. So now that we’ve got our goals through the POUR acronym and our cases, we need to have a good idea of what elements are available to build with.

So before we try to style anything or provide interaction, we need to get a handle on the HTML tags and properties we want and how they’re going to get rendered to the page. This actually has a massive impact on the overall, all the decisions that get made later. For instance, the style, JavaScript, everything else that comes after. If we don’t have a good concept of what we have at the beginning and where we’re trying to go, then we won’t be able to add on.

Fortunately, WordPress has a built-in way to generate a menu with the wp_nav_menu function. And I'm sure that many of you are familiar with this. The good news is that wp_nav_menu is easy to use and get started with. It accepts a list of arguments that provide a fair amount of customization quickly, and then, as soon as the page loads, it displays a menu.

One argument which I tend to change immediately is the container argument. By default, the menu will be contained by a div. This isn't ideal for a navigation menu of this type, because a div doesn't impart any structure or meaning on its own, but you can set the container to a nav element.

When you do this the final output of the menu will be wrapped in a nav tag. Browsers will then detect the tag and create an implicit ARIA role for it, with a landmark. This is a useful accessibility improvement, because a screen reader will have an easier time parsing the document's content. There are other arguments which you may be familiar with as well.

For instance, you can set custom classes or IDs, or set a theme location. Depending on how many navigation sections you have on the page, you might also set the container's ARIA label argument as well, to further clarify the menu's purpose. Now, wp_nav_menu works really well for menus that have only one level of content. However, once you add depth to the menu, you'll run into accessibility issues.
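Put together, the kind of call described above might look something like this minimal sketch; the theme location and class names are placeholders, and the container argument is the one doing the accessibility work:

```php
<?php
// Hypothetical template fragment: render a registered menu inside a
// <nav> landmark instead of the default <div> container.
wp_nav_menu( array(
	'theme_location' => 'primary',   // assumes a 'primary' location is registered
	'container'      => 'nav',       // wrap the menu in <nav> for an implicit landmark
	'menu_class'     => 'main-menu', // placeholder class for styling
	'menu_id'        => 'main-menu', // placeholder ID
) );
```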

Let’s take a look at how wp_nav_menu interprets each of the three cases I shared, then we can see how we might improve their accessibility. The markup for a plain link is pretty straightforward. WordPress gives us just what we need, an anchor tag with a URL. This will provide a nice accessible link that supports people who use keyboard navigation and or screen readers. So far so good. After this, things start to get complicated.

In this case, we want to provide the ability to either click on a top-level link to visit a page or access items in a sub-menu. As the markup is now though, we run into a couple of things that could be better.

First the aria-label attribute on the link takes precedence for screen readers over the content inside of the anchor tag. So instead of a screen reader saying the inner text of the link which is About Us, it will say the value of the aria-label which is just sub-menu. That’s not terribly descriptive, because someone might want to click the link.

Next, aria-haspopup will announce to a screen reader that a pop-up is available, but we’re missing supporting ARIA attributes to associate the link with a specific sub-menu and indicate if that sub-menu is open or closed.

Last, depending on how the sub-menu is hidden and shown, there might be no good way for people who use a keyboard to get to it effectively. In addition, aside from showing the sub-menu when the mouse hovers over the list element, it’s not clear how we can open the sub-menu.

If we click on the link it’ll take us to the page and never open the sub-menu. If we just want the top nav item to work as a toggle, we can use a hash symbol as the link. Of course then we need to use JavaScript to prevent it from reloading the page when the link is clicked and this is something that internally at our department we wanted to avoid. However, if we get rid of the href attribute by removing any link from the admin area, the item can’t be focused with a keyboard. Browsers will interpret anchor tags without an href property like a span or any other inline element.

So we can add non-JavaScript-dependent toggles to the list of issues we need to handle. Now I'd like to show you what we'd like to add, how we're going to get there, and what the result is. At this point, what I wanted was a portable, flexible, robust set of choices to build off.

That included things like being able to distinguish between links that go somewhere and toggles that don't. More ARIA attributes to support people who use screen readers, custom classes, icons, and so on. WordPress does provide some additional filters for navigation menus, such as the wp_nav_menu_items and wp_nav_menu_objects filters.

However, I wanted to be able to create the same kinds of markup across a variety of themes. For us, this would help with our brand consistency and visitor expectations from one site to the next. Fortunately, we can tap into and override the core class wp_nav_menu uses to generate the output for a navigation menu.

We can do this with a custom Walker class. To make it portable we can then turn that into a Composer package that can be installed from GitHub or Packagist. The project can then be used in many places.

The Walker_Nav_Menu class is part of a collection of Walkers which all inherit their functionality from a general Walker class. Now to be completely honest, when I started looking into this back then, the Walkers were incredibly confusing. Fortunately, these classes take care of a lot of the work for us though.

For example, we don't have to worry about writing our own recursive functions to traverse the menu structure, since that's their main purpose. Rather, we can focus on logic for the markup our menu might need and what the output should look like.

Once you write your custom Walker class you can then add it to the list of arguments used by wp_nav_menu.
When you do this, all the markup created by wp_nav_menu will flow through the custom Walker. This means that you can override any of the Walker_Nav_Menu's methods with your own, and that's what wp_nav_menu will use. There are four class methods available, which will let you start a level, start an element, end an element, and end a level. In this context, we only need two of those: the one to start a level and the one to start an element.

Every item in the menu is going to pass through the start_el and end_el methods. These methods are where we can add logic to support the type of markup we want to build.

Consequently, the start_el method was where most of the work for my custom Walker happened. However, we're not just dealing with a flat menu, we need to be able to add depth as well. When we create a nested sub-menu, the Walker will pass the output for the markup through the start and end level methods as well. These create the unordered lists. We can take advantage of that to affect how the sub-menu unordered list elements get created.

In terms of accessibility, we can make sure we're adding useful IDs to each level, for example. By default WordPress wasn't doing this, or at least I don't remember it doing it back when I started this. It may do it now. This will let us match up the sub-menu to an aria-owns property in the item's markup.
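As a rough sketch of that idea, start_lvl could stamp an ID on each sub-menu list for aria-owns to point at; the class name and ID scheme here are placeholders, not the talk's actual implementation:

```php
<?php
// Sketch: give each sub-menu <ul> an ID so a sibling toggle button
// can reference it with aria-owns. The naming scheme is a placeholder.
class AM_Walker_With_Ids extends Walker_Nav_Menu {
	public function start_lvl( &$output, $depth = 0, $args = null ) {
		static $counter = 0;
		$counter++;
		$output .= sprintf( '<ul id="sub-menu-%d" class="sub-menu">', $counter );
	}
	// end_lvl() from the parent class closes the </ul>.
}
```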

Now that we've seen the path the markup takes through the Walker class, let's zoom in to the start_el method where most of the logic happens. Here I've written out some pseudo code to describe the flow through our custom start_el method.

The shell of this is exactly what you would see if you took a look at the Walker. We can start by checking to see if the current item has children. That is, should it be a link by itself, or should it have a sub-menu with it.

If not, if it’s just a link by itself, we can output the start of a list item with a link inside and then finish. Otherwise, we need to know if the top level item should be a link or a button. If it doesn’t have a link, we can output the start of a list item with a button on the inside.

Otherwise, we need both a link and then a button as a sibling. This way we can account for all the cases in the markup. With this logic in place we can also do things like add additional custom classes and data attributes. Those can help us style and interact with the menu more easily.
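The branching just described can be condensed into a sketch like the following; this is not the actual Walker from the talk, the class name, CSS classes, and label wording are all placeholders, and the real implementation handles more attributes:

```php
<?php
// Condensed sketch of a custom Walker covering the three cases:
// plain link, link plus sibling toggle button, and button-only toggle.
class AM_Accessible_Walker extends Walker_Nav_Menu {
	public function start_el( &$output, $item, $depth = 0, $args = null, $id = 0 ) {
		$has_children = in_array( 'menu-item-has-children', (array) $item->classes, true );
		$title        = esc_html( $item->title );
		$url          = ( ! empty( $item->url ) && '#' !== $item->url ) ? esc_url( $item->url ) : '';

		$output .= '<li class="am-menu-item no-js">';

		if ( ! $has_children ) {
			// Case 1: a plain link by itself.
			$output .= sprintf( '<a href="%s">%s</a>', $url, $title );
		} elseif ( '' === $url ) {
			// Case 3: no real link, so the item acts purely as a toggle.
			$output .= sprintf(
				'<button aria-expanded="false" aria-label="Open the %s sub-menu">%s</button>',
				$title, $title
			);
		} else {
			// Case 2: a real link plus a sibling toggle button.
			$output .= sprintf(
				'<a href="%s" aria-label="%s, with sub-menu">%s</a>' .
				'<button aria-expanded="false" aria-label="Open the %s sub-menu"></button>',
				$url, $title, $title, $title
			);
		}
		// end_el() closes the <li>.
	}
}
```

A Walker like this would then be passed as the 'walker' argument to wp_nav_menu.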

Let's look at some examples of the output that the custom Walker I wrote creates. So here we're back at our plain link, but notice that we've added prefixed am- classes in addition to the standard WordPress classes. These will help designers and developers create styles specifically for these types of menus without relying on WordPress classes. There's also a no-js class on the element.

The no-js class will be removed from the markup if the menu's JavaScript is available. If the JavaScript doesn't run, this class will provide a fail-safe for opening and closing sub-menus, as I'll show soon.

Here's how we can improve the markup for linked items with sub-menus. First, instead of the link's aria-label only saying sub-menu, it now describes the text of the link and gives a hint about how you can interact with it. You can obviously change this to whatever is appropriate for you.

Next, the button is going to provide a semantically appropriate element that we can interact with whether JavaScript is available or not. We’ve also added a couple more ARIA attributes to it.

Aria-expanded will indicate to a screen reader if the sub-menu is open or not. We can change its state with JavaScript later. Since we don't want any text to show on the button, we need to give it an aria-label to provide context. This is in case the button is focused. Inside the button we can then make space for an icon.

Here we can use the aria-hidden property. The aria-hidden property will hide only the icon from screen readers, but still leave it visually available. Since we've created screen reader accessible text on the button with the aria-label property and added the aria-expanded property, we can use aria-hidden to hide the icon without a problem. The aria-owns property will associate the button with the sub-menu that has the matching ID. This works in a way similar to how a label is associated with a form element.

This is now markup that will let a visitor choose between clicking on a link or opening up a sub-menu.
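Assembled from the attributes just described, the improved markup might look roughly like this; the IDs, URLs, and label text are placeholders:

```html
<!-- Illustrative sketch of a linked item with a sub-menu; IDs, URLs,
     and label wording are placeholders, not the talk's exact output. -->
<li class="am-menu-item no-js">
  <a href="/about-us/" aria-label="About Us, with sub-menu">About Us</a>
  <button aria-expanded="false"
          aria-owns="sub-menu-about-us"
          aria-label="Open the About Us sub-menu">
    <span class="am-icon" aria-hidden="true"></span>
  </button>
  <ul id="sub-menu-about-us" class="sub-menu">
    <li class="am-menu-item"><a href="/about-us/history/">History</a></li>
  </ul>
</li>
```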

In the last case, there is no link for the item. It's replaced entirely by a button, so it can always act as a toggle. You can imagine that if you combine all these cases you can start to get a very flexible and accessible menu, but we're not quite done yet. So far we've made sure that people will be able to interact with the menu with a mouse, keyboard, and screen reader, but now we have to think a little bit more about design and interaction.

In our context, we wanted menus to work primarily through toggling action.

Actually, as I was writing this, I forgot one thing, and I added this slide today. As I was preparing the presentation I realized I had made a mistake. In the navigation menu you want to have the current page and/or its top-level parent visually highlighted somehow, and this lets people know which page is active.

WordPress gives classes for this, but you also want to make sure that the aria-current property is set on pages. Prior to WordPress 5.3, I think, you had to do this manually. Typically, you could do this with the nav_menu_link_attributes filter, but in checking, this filter doesn't seem to work with the custom Walker I created. I'm not entirely sure why right now, so I'm going to have to go back and refine it, but that's okay.

I think the more important thing is to note that even while trying my best, I still ended up missing something by accident, which is all the more reason for me to keep learning and researching.

Now I’m not going to go over everything in the CSS for this project, but rather focus on CSS related to hiding and showing content. When we use CSS to hide and show content, our choices affect how people who use screen readers and keyboards can or cannot interact with web content.

The ability to show and hide content is an important part of user interface and user experience design. This is especially true for navigation menus where we often want to hide or show content on hover or focus. If we’re not careful, we can inadvertently use CSS to create an inaccessible experience for our visitors. So I think that it’s important to understand the accessibility implications of how we use CSS to accomplish this.

There are several cases where this chart is important. We might consider the case where a visitor who uses a screen reader wants to skim all the links on a site. We should be aware that if we use display:none or visibility:hidden, we'll be hiding content from them. Another visitor may come to our site and have an issue with motor control and browse with a keyboard. Depending on how we hide and show content, our choices may make that user experience more or less accessible.

If the links in the navigation menu can be found in other ways or duplicated somewhere, that may not be a problem. But we should think about our choices and the overall context of the site as we make them.

One of the design requirements as I’ve said for our menus, was that some menus should be open and closed by toggle. The usual way to do this is with JavaScript, so that when someone clicks on the menu item, the sub-menu is shown or hidden. However, we can’t predict if people will have JavaScript available to them.

Some people may choose to not use JavaScript. They may block it entirely. There may be a delay getting JavaScript to someone, or there may be an error that prevents JavaScript from loading correctly. If we’re going to hide the sub-menus or content, we should still think about how we can give people a way to access those items.

One way to do this is the focus-within pseudo-class, which I'll describe in a moment. You should be aware, though, that focus-within isn't supported by Internet Explorer or non-Chromium versions of Microsoft Edge.

According to the analytics I looked at for our sites, this represented a small percentage of our visitors. Further, this pseudo class will only be used in the event that JavaScript fails altogether.

I think that the browser restrictions are a fair trade-off to get a very similar effect. Now if the JavaScript does not run correctly the no-js class and its styles will be applied to the menu. This includes the focus-within pseudo class. The Mozilla developer network says that focus-within, “represents an element that has received focus or contains an element that has received focus.

In other words, it represents an element that is itself matched by the focus pseudo-class or has a descendant that is matched by focus.” In plain terms, this means that even when a parent element can’t be focused itself, if a child element can, the entire structure can be styled as if the parent were focused. That’s why the pseudo-class can be applied to the unordered list element instead of an anchor tag.

Typically, unordered list elements can’t receive focus, but because the list can contain an element like a link that can receive focus, the list will respond to the focus-within selector. Here’s a small demonstration. The markup for each of these menus is exactly the same; you can confirm as much by going to the CodePen linked in the slide. The only difference is the use of the focus-within selector.

You can see that on the left, the sub-menu elements for the About Us menu item aren’t reachable by keyboard. However, on the right you can tab through them as you would the top-level links.
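A minimal version of that fallback might look like the following. The selectors here are hypothetical; the real theme's class names may differ:

```css
/* With JavaScript disabled, the no-js class remains on the menu,
   so these rules apply. Sub-menus start hidden. */
.no-js .menu li > ul {
  display: none;
}

/* The <li> can't receive focus itself, but when the link inside it does,
   :focus-within matches the <li> and reveals the nested sub-menu.
   Tabbing onward moves into the now-visible sub-menu links, which keep
   focus within the <li>, so the sub-menu stays open. */
.no-js .menu li:focus-within > ul {
  display: block;
}
```

This is why the demonstration on the right is keyboard-traversable even with no JavaScript at all.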

Here is an example in context. The Admissions and About elements are links, with buttons to toggle sub-menus next to them as siblings, symbolized by the carets.

Without JavaScript enabled, you can see how a visitor might traverse the menu items just with a keyboard and of course, they can still use a mouse as well. There are still some improvements we can make to this though. For instance, maybe a visitor doesn’t want to tab through every single item of every single sub-menu to find the one they want.

In this menu, maybe they want to tab to the student life sub-menu without going through all the others. Now for that we need to start talking about JavaScript. We can definitely get some accessibility improvements to this type of menu if we add JavaScript.

In this project, when a button is clicked, the aria-expanded state changes from true to false or vice versa. The CSS is then set up so that the button’s icon changes depending on the state. This provides both visual and auditory context for the site’s visitors as they use the menu.
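The aria-expanded flip described above can be sketched in a few lines. This is not the project's actual code, just an illustration of the state change the CSS hooks into:

```javascript
// Flip a toggle button's aria-expanded attribute between "true" and "false".
// Returns true when the sub-menu is now open, false when it is now closed.
function toggleSubmenu(button) {
  const expanded = button.getAttribute('aria-expanded') === 'true';
  button.setAttribute('aria-expanded', String(!expanded));
  return !expanded;
}
```

Because the attribute itself is the source of truth, screen readers announce the new state and the CSS can restyle the icon from the very same attribute, with no extra bookkeeping.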

However, one group that we haven’t really talked about is developers themselves. We might create a plugin or library that has technically excellent accessibility, but if it’s hard for a developer who’s beginning their career to use the JavaScript or CSS that’s included, they may choose to avoid the project altogether.

The JavaScript (or CSS, for that matter) for this project doesn’t require any additional dependencies or libraries. Not only that, but in the context of WordPress, the relevant script or styles can be enqueued directly from Composer’s vendor directory. This means developers don’t need any additional front-end tooling or bundling libraries to get started. All that’s required is that they follow a typical WordPress workflow to enqueue dependencies.

Once the script is properly enqueued it also has a very small external API. There’s a configuration object which only has two properties, both of which are optional. In order to use the navigation JavaScript, there’s only one method you need to know. This means developers with a wide range of experience can use this in their projects.
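A configuration object with two optional properties usually reduces to a defaults-plus-overrides merge. The property names below are purely illustrative, not the project's real option names:

```javascript
// Hypothetical defaults for a small navigation API: one init method,
// one config object, both properties optional.
const defaults = {
  menuSelector: '.site-navigation',  // which menu to enhance
  toggleSelector: '.submenu-toggle'  // which buttons open sub-menus
};

// Merge the caller's (possibly partial or absent) config over the defaults.
function resolveConfig(userConfig = {}) {
  return Object.assign({}, defaults, userConfig);
}
```

With this shape, a newer developer can call init with no arguments at all and still get working behavior, which is exactly the low barrier to entry being described.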

This approach may be less flexible than something with a larger API. However, I believe in this context it’s a good trade-off, because developers with a wide range of experience will be able to use it more quickly. In fact, as a university, sometimes we have student developers who work on our projects.

The init method in the JavaScript starts by defining which menu the JavaScript should apply to. It then removes the no-js CSS class from that menu. Finally, it runs a setEventListeners method on the menu and the document. The setEventListeners method attaches listeners to menu items and prepares them to pass events to an eventDispatcher.

The dispatcher determines the type of event and routes it to a particular eventHandler. Once the event is properly dispatched to the handler, it’s responsible for the logic to set the state of menu items and properties. Depending on the state of the menu, eventListeners are also added to the document, so people can close open sub-menus, with the escape key or by clicking outside the menu area.

Here are two of the event handlers that are part of the JavaScript. They’re divided so that they can manage mouse or focus events separately. While I was working on the project, I found it was best to keep mouse, keyboard, and focus events separate from each other.

That way I could handle exceptions for each type of event while encapsulating other parts of the code for reuse. One thing to note here is that these menus follow the disclosure design pattern for navigation menus. As I understand it, in terms of keyboard support, this means they’re required to do three things: move among items with the Tab key, select and activate focused items with the Space or Enter key, and close items with the Escape key.
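Those three keyboard requirements can be summarized as a small decision function. The action names are illustrative; real handlers would do DOM work instead of returning strings:

```javascript
// Map a key press to the behavior the disclosure pattern requires.
function disclosureKeyAction(key) {
  switch (key) {
    case 'Tab':
      return 'move-to-next-item';      // browser default; no preventDefault needed
    case 'Enter':
    case ' ':
      return 'activate-focused-item';  // open/follow the focused item
    case 'Escape':
      return 'close-open-submenu';
    default:
      return 'none';                   // arrow keys are NOT required by this pattern
  }
}
```

The "none" branch for arrow keys is the key difference from the menu ARIA role discussed next.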

This is very different from an element with the menu ARIA role. The menu ARIA role requires more types of keyboard interaction, such as the ability to use arrow keys. However, in the example linked in the slide, well, it doesn’t appear to be linked in the slide, sorry about that.

We’ll get that later.

Uh the markup is trying to create something more like an application menu for a text editor. If it turns out that I’m completely wrong about this, and that I need to add more keyboard support to the menus, that’s okay.

The JavaScript is flexible enough to allow more types of keyboard events. The event routing would require only a few additional cases. From there I could add appropriate event handlers without too much difficulty, I hope. In any event, I think it’s always worth going back to review projects from the past and see how they can be updated.

Earlier this morning I read an article about WCAG 2.2 and for all I know I’ll need to bring this project up to date to conform at some point to that, or a future version.

In any event, here you can see the effect the JavaScript has on keyboard navigation. A visitor can now select among different sub-menus however they like, with a mouse or a keyboard. Once a sub-menu is open, the next press of the Tab key will take visitors into the sub-menu.

You can also see that the relevant icon stays changed until someone leaves and closes the menu. This is done by tying the CSS to the aria-expanded attribute of the toggle.
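Tying the icon to the attribute might look something like this. The class name and glyphs are hypothetical, not the project's actual styles:

```css
/* Key the icon off the toggle's ARIA state, so the visual indicator and the
   state announced to screen readers can never drift out of sync. */
.submenu-toggle[aria-expanded="false"]::after {
  content: "\25BE"; /* ▾ pointing down: sub-menu closed */
}

.submenu-toggle[aria-expanded="true"]::after {
  content: "\25B4"; /* ▴ pointing up: sub-menu open */
  /* alternatively: rotate an icon element with transform: rotate(180deg); */
}
```

Since the JavaScript only ever updates aria-expanded, there is a single source of truth for the menu's open/closed state.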

Here are just a few resources collected into one place. I listed a few of the themes this project is part of, just to demonstrate how it can be styled in different contexts. This variety is why I wanted to make sure the project was portable, which is why we chose to bundle it as a Composer package.

I thought it would also be helpful to give a few code resources and documentation as well. You can browse the repo we use and if you like contribute to help improve it.

So to wrap things up, there’s still quite a lot to do, even after so much time. Web accessibility standards continue to be updated and refined. Plus, as I’ve shown, there’s at least one thing that I’ve missed and need to revise. So personally, I want to make sure that I stay informed and engaged with issues around accessibility, so I can be of service to the people who visit us.

So to finish up, I’d like to thank all of you for being here with me. And I’d also like to thank everyone involved with today’s conference for giving me some time to share this with you all.

If you have questions, I am more than happy to answer them. Let’s see if I can stop the screen sharing.

I can turn my camera on, right? Is that okay? Yeah okay.

Mike: Yes, definitely. Hey.

Adam: There we go, I did that. Hey it’s good to see you in person.

Mike: Nice to see you in person, uh, in the, in the interwebs as it were. So yeah, um, great talk, uh, very informative. What is the project license? What’s the open source license, is it MIT, GPL?

Adam: You know, I don’t remember off the top of my head. It is probably a very broad, broad license. I mean, this was basically, um, this was basically: we looked at the websites that we had and realized that, beyond just ordering things with proper, uh, you know, H1, H2, H3, right, there were some broader issues involved that we were going to have to deal with at a technical level.

And this was the first one I wanted to tackle, because it seemed like it was going to be one of the most difficult um. So I think the license is, if I even wrote one is very broad. Yeah.

Mike: Fair enough. I always just find that interesting, uh, how people license things, um, but anyway, I’m a geek, so, uh. We have some questions, and if you still have questions, um, please put them into the chat and Kayla, our chat moderator, will send them over to me, and then I’ll say them out loud and then Adam will answer them. So our first one is: is a mega menu suited to accessibility, or can it be made usable? If so, can you describe or provide a sample?

So I think is a mega menu accessible or can it be made accessible?

Adam: I mean, I guess that would entirely depend upon whether or not it conforms to WCAG standards, right. So can you navigate the menu using a mouse and or a keyboard, uh does it respond appropriately to a screen reader, when you do so? Um so for instance my, just so you all know, my reference point for screen readers is the Mac OS screen reader.

I honestly don’t have JAWS or anything like that, and we don’t really have that kind of material where I work. But I try to use the Mac OS screen reader as best I can. I would say that, yeah, you should just be very careful in the case of a mega menu, because you may inadvertently hide content that you would rather have people get to.

Um, yeah, so I think that what you should do in the case of a mega menu is try to navigate the site with the screen reader and see what happens. Like, can you actually get to any of the items? Um, can you get to the items if you only navigate by keyboard? Yeah, and I think that will be a better determinant of whether it’s accessible or not than me just giving a broad, you know, yes they are, no they’re not, yeah.

Mike: Sure, our next question is um, how have you found the experience to be for voice command users when using ARIA label to provide the accessible name?

Adam: Uh so is the question, I guess I’m not quite sure I understand the question. Is the question, does it read the name appropriately?

Mike: I think so um I mean I.

Adam: I, yeah I believe that it reads, last I checked it did um.

I mean, it’s certainly, we don’t have anything that responds to voice commands uh in these menus. Um certainly I, I don’t have the background to handle that. Although if it’s something that I need to do, I can certainly start looking into it. But as best I know, uh and the last time I checked, yes they, they all read correctly.

Mike: Okay. And our last question, and if anyone has a last-minute question, please put it into the chat now and we’ll try to get to that, um, uh, is: menus seem, um, a web feature that hasn’t gone away over time.

With Skip NAV and one-page sites becoming more prevalent, do you see them being phased out for simple more accessible experiences?

Adam: Oh that’s actually really interesting um.

So we definitely use skip nav. Um, I’m not, I have no information about how many people actually use that. Um, as far as single-page sites go, uh, you know, I think there’s a big difference between literally a single page, which is like a brochure, and a single-page application site, which is larger scale or something like that. Um, in the latter case, or in the former case rather, literally a single page, yeah, I’m not quite sure why you would necessarily need a navigation.

Um, you might just have some links at the top that would jump you up and down. In the latter case, I’m not sure. I think that some application sites are so complex that you wouldn’t necessarily need them. I think the only thing that I would say is, lots of people now get to sites just by Googling the information they want.

Um and then my intuition is that if for some reason the content doesn’t make sense, or is structured in a bizarre way, then they might need to use, uh more navigational elements in order to find what they’re looking for, after the fact.

Mike: Sure, okay. Uh, well, um, again, I just want to thank you so much, Adam, for sharing your expertise. You listed your contact information on the slide, and if you want to get those slides, you can go to wpaccessabilityday.org to download all of the slides and find out information about all of our speakers and our wonderful sponsors. Please continue the conversation on Twitter. Our hashtags are #wpaccessibilityday and #wpad2020.

And our Twitter account is wpaccessibility.

At the 2 o’clock UTC hour, please join me, um, and Christina Workman; she’s going to talk about Accessible Websites Benefit Everyone. Um, and again, just a big thank you to our chat moderator, Kayla, um. She is, uh, stepping away and we’re getting a new chat moderator at the next hour. But again, thanks, Adam, and thanks, everyone, for watching and being part of WP Accessibility Day 2020.

Adam: Thanks, thanks Mike. Thank you very much.