>> ANDREW OLSON: Hello, everybody, and welcome to our presentation, live captioning: Make your next event accessible for everyone. Slides are available on the MidCamp.org website and the session page. Qymana, do you want to take this?
>> QYMANA BOTTS: Yeah. A little bit about us. Andy is a front end developer from the Chicago suburbs. He's been working with Drupal since 2008, and fun fact about Andy, he is an excellent musician and he played in a band at Lollapalooza, a pretty big music festival in Chicago. So, super cool.
>> ANDREW OLSON: It was a lot of fun. Qymana is also a musician, but she is a back end developer from northwest Indiana and works for the Nerdery. She is a Certified Professional in Accessibility Core Competencies, so she has a certification from the IAAP. It just means that she is awesome and knows how to help people online for accessibility purposes.
And the fun fact about Qymana is she proudly ripped her wedding dress playing laser tag at her wedding reception. So I think between Lollapalooza and laser tag, that pretty much sums up a lot of what Qymana and I are about and the fun we have.
But, what we're here to talk about is live captioning. The agenda is, we're going to go over what is live captioning, we're going to talk about the benefits that are beyond the event, and we're going to talk about the next steps for the initiative and how you can help.
>> QYMANA BOTTS: Okay. So, what is live captioning? It's basically a speech-to-text tool that takes spoken words and converts them into text. It runs in Chrome, which supports the Web Speech API, a mechanism for converting speech to text on a web page. Chrome uses Google's servers to perform this conversion: it sends the audio to Google, and the recognition happens there rather than directly on the page itself.
Along with the audio, it also sends the domain of the website using the API, your default browser language, and the language settings of the website, but it doesn't send cookies.
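The browser side of that flow can be sketched with the Web Speech API. This is an illustrative minimal example, not the actual lc.MidCamp.org code; the `#captions` element and the `appendFinalResults` helper are assumptions made for the sketch. In Chrome the constructor is vendor-prefixed as `webkitSpeechRecognition`.

```javascript
// Pure helper: keep only finalized chunks in the running transcript.
// (Interim results keep changing as recognition refines its guess.)
function appendFinalResults(transcript, chunks) {
  for (const chunk of chunks) {
    if (chunk.isFinal) transcript += chunk.text + " ";
  }
  return transcript;
}

// Browser-only wiring; guarded so the sketch is a no-op elsewhere.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;     // keep listening across pauses
  recognition.interimResults = true; // stream words as they are recognized

  recognition.onresult = (event) => {
    // event.results holds every chunk so far, interim and final.
    const chunks = Array.from(event.results).map((r) => ({
      isFinal: r.isFinal,
      text: r[0].transcript, // best alternative for this chunk
    }));
    document.querySelector("#captions").textContent =
      appendFinalResults("", chunks);
  };

  recognition.start(); // Chrome prompts for microphone access here
}
```

Calling `recognition.start()` is what triggers the microphone prompt and begins streaming audio to Google's recognizer.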
The tool captures the speech as plain text, but it can also output it in the SRT file format. SRT is a file format used by different video playback programs that contains subtitle information: the sequential number of each subtitle, its start and end timecodes, and the subtitle text.
You can use these SRT files as captions on YouTube videos or whatever your preferred video hosting service is. You can also post the transcripts to your website. So the live captioning tool provides both the plain text transcript and the SRT format.
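The SRT cue structure just described is simple enough to show concretely. These are hypothetical helpers, not the tool's actual code; only the format itself (sequence number, `start --> end` timecodes with comma-separated milliseconds, subtitle text, blank line) comes from the SRT convention.

```javascript
// SRT timestamps use the form HH:MM:SS,mmm (comma before milliseconds).
function srtTimestamp(totalMs) {
  const pad = (n, w) => String(n).padStart(w, "0");
  const h = Math.floor(totalMs / 3600000);
  const m = Math.floor((totalMs % 3600000) / 60000);
  const s = Math.floor((totalMs % 60000) / 1000);
  const ms = totalMs % 1000;
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)},${pad(ms, 3)}`;
}

// One cue: sequence number, "start --> end" line, text, trailing blank line.
function srtCue(index, startMs, endMs, text) {
  return `${index}\n${srtTimestamp(startMs)} --> ${srtTimestamp(endMs)}\n${text}\n`;
}

const example = srtCue(1, 0, 2500, "Welcome to live captioning.");
// example:
// 1
// 00:00:00,000 --> 00:00:02,500
// Welcome to live captioning.
```

A whole transcript is just these cues concatenated in order, which is the shape YouTube accepts as an uploaded caption file.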
>> ANDREW OLSON: On the screen now, you see a layout of how this would work at a live event. I'm going to talk you through this graphic, which shows how to bring live captioning to any event. It's drawn for the room we would have been in at MidCamp, but with some modifications you can use this anywhere, for everybody.
But starting, what I'm going to do is focus in on certain things. So, is everybody seeing the focus? Qymana, can you see that on the screen?
>> QYMANA BOTTS: Yes.
>> ANDREW OLSON: Great. So I'm going to start with the really important person: the presenter. First and foremost, you're going to need a microphone for the presenter. You want to capture their audio and get a strong signal, because they are the one providing the speech. Following that down, the next most important thing is a computer that is dedicated to live captioning. That computer needs Internet access, and it's going to be on a website; we already have the tool set up at lc.MidCamp.org, and we'll show that in a second.
But basically, this computer is connected to the Internet and it's on that page and it's capturing the audio from that speaker.
What then happens is that the output of that computer goes over here, to a monitor for live captioning: a TV monitor set up in the room next to the presenter's screen. As the words come from the presenter, the speech is converted to text by the computer and displayed on this screen in a format for everybody to see.
The next really important part of this is point D, a reserved section for the audience. In any event room, like a breakout room, you want to have a section where people can view the monitor and also have a great line of sight to the presentation that's happening at the same time. Going back to point C, the most important part is that it's separate: it's not on the slides or the presentation screen. It's a dedicated area that just displays the text of the speaker to the audience.
Another really important piece of this is the volunteer staffing the captioning computer. Really, they're just making sure they're able to see the words on the screen and that everything in the room is accessible and helpful.
If the audio signal drops at an event, the captioning tool can get disconnected, so this volunteer is there to watch and make sure that everything is okay. They can reconnect the audio signal so speech-to-text can happen again and flow all the way to that monitor.
The next point, point F, is the computer for the presentation. What we want to show here is that the computer for live captioning is separate from the computer for the presentation. It doesn't have to be integrated, and the presenter doesn't have to worry about it or download anything. What's great about this solution is that they can focus on giving the best presentation possible. And that's the final point: there's a separate screen, so the presenter is able to present while the tool just captures the text, helping the people who want and need it to understand the presentation as well as possible.
And the final point too is for any event in any room to be accessible, you want to have some room audio. So, whenever you sign up to speak at an event, it's always great to ask, is there going to be captioning? Because what we're talking about today is this can be used at any event anywhere.
The second thing is, you want to make sure the room is laid out to be accessible for all people, so these are great things to ask as a speaker. Good room audio is really important for people in the back to hear, as is making accommodations by reserving that section up front with a clear line of sight. These small things can add up to make a wonderful event.
So, we can take questions at the end about the layout of this room. Hopefully by going through each of the lettered points, you can see the different pieces of this and how they fit together.
So, let's talk about this part of it, the live captioning computer, and really what's going to be displayed on the screen.
So what I'm going to do now is play a video. We have links afterward, but the address is actually at the top here: lc.MidCamp.org, where LC stands for live captioning. I'm going to play the video so you'll see it in action, and I'm going to describe it as we go.
On the video, I'm clicking how to use, and this is a brief explanation at the top telling people how to use it. But it's as simple as clicking that button, and then allowing the microphone to pick up the audio.
What you're seeing on the screen now is using a Chrome browser, I'm able to allow the microphone to pick up my audio, and I just made this short video by speaking and introing the talk we're giving right now.
When I'm done, I can just click on the text, come back and hit refresh, and you can see that the session is captured. And if I click the view selected session transcript button on the screen, you can see it captured everything in a timecoded format.
I can then click that button and go back in and continue that session, so you can stop and start as needed. And what you saw there is that it said "Angie Olson" and Qymana's name was misspelled. So what you can do is click back out, come back, view the selected transcript again, and it refreshes.
And what I'm showing you here is that live captioning is not perfect, but this tool captures that text in the browser and presents it this way. I'm going to pause the video here. It takes all those words and captures them in the browser. The transcript is not being sent anywhere; we're not sending it to [email protected]. There's no login. It's all in your Chrome browser, and all this is doing is accessing the transcript stored in your browser.
And what's cool about this tool is that it captures exactly what was on that screen at that time, and you can come in and make corrections, so this can become a transcript. And with the timecoded format, I could upload this to YouTube and sync these captions with a video; SRT is a format that YouTube loves for syncing transcripts with video at those timecoded points.
So, let me back the video up a bit. I'm going to jump to this part here. So, this is what is going to be on that monitor for the room for everybody to see. And what you see here is we've made the choice to have a high contrast of a black background with white text on there. Depending on the room, we are able to increase this font size or decrease the font size, and the point there is you need to know your speaker and understand how fast they're talking.
And also, it depends on where the monitor is placed in the room. So it's kind of a judgment call by that volunteer and by the person setting this up, to create the best experience for the audience and support the presenter and the text that's on the screen. All those adjustments can be made via the tool. This is a great area to volunteer on: on Contribution Day, on Saturday, we're going to work on making this a little bit more visual, maybe an interface to adjust it in real time.
The way we have it now, a person is able to adjust that via CSS, but you aren't currently able to adjust it on lc.MidCamp.org itself. That is something we're working on and hoping to add shortly.
But, to recap briefly: anybody can go to lc.MidCamp.org, click the "click to caption" button, and allow the microphone. As soon as this session is over, everybody can do this, play with it, and have a transcript of their own right in their Chrome browser.
So any meeting that you have, even a Zoom meeting, as long as you can have the audio and hear that audio inputted, you can have a transcript and make your event accessible.
So, briefly, what live captioning is not. It is not actually how we're captioning this event right now. It is not CART, which stands for Communication Access Realtime Translation, also known as open captioning, real-time stenography, and real-time captioning. The interesting thing about this talk, and about MidCamp at this point, is that we went from a live in-person event, where we were going to demonstrate and use our tool in a room and you would actually be in that room with that layout, to a virtual event.
And the point of this is that once we switched to a virtual event, we had the ability to have a real time stenographer, real time captioner person. So, what I'm going to do is go to the next slide, and Qymana is going to talk about something that is very important to our initiative.
>> QYMANA BOTTS: So, as Andy was saying, live captioning is, like, a totally automated thing and it's not the same as professional CART services. And we want to be clear that our goal is not to replace CART or interpreters. Our goal is to provide a reliable and no cost tool to make things more accessible where there would be otherwise no option.
Services like CART and hiring interpreters, like ASL interpreters, are ideal. But they can be cost prohibitive for events, especially if there are no sponsors to cover the cost of those things.
McDonald's currently sponsors some of Chicago's meetups so that they're able to provide CART services and interpreters. We are using ACS captions right now, which is, like Andy said, a professional CART service; it's called Alternative Communication Services.
So, as far as the cost goes: in 2018, onsite CART would be around $150 per hour, and this is just a rough Chicago estimate. For remote CART services, the cost can be anywhere from $125 to $130 per hour, sometimes more, sometimes less. ASL interpreters are around $81.25 per hour per person. And DeafBlind interpreters can be around $95 an hour.
So if you consider
>> ANDREW OLSON: Really quick, ASL stands for American Sign Language interpreters. Just want to put that in there.
>> QYMANA BOTTS: Yes. Thanks, Andy.
But, yeah. So, when you take into consideration an event like a Meetup, it might not be feasible, from a cost perspective, for a Meetup to hire any of these services. And if you're looking at an event like MidCamp, for example, where we have several sessions going on simultaneously over the course of several days, the cost of these services really starts to add up. Luckily, at MidCamp, we are fortunate to have some really generous sponsors, and given some of the shifting we had to do this year, we were able to use professional CART services.
But, for events that don't have the budget, we still want to provide an option to those events that otherwise just wouldn't be able to provide any sort of captioning or interpretation for their attendees.
And so that's where the live captioning tool comes in, where it's no cost to them, it's just as long as you can pull together the materials, the equipment, then you can provide captioning for your attendees.
So, live captioning is for everyone. It's not just for people who have hearing loss or deafness. It can benefit people whose first language might be different from the language being spoken at the event. It can also be better for people who comprehend the written word more easily. And there are some cognitive disabilities where it helps to see what is being said written down as well, reinforcing what the speaker is saying by having things spelled out a little more clearly.
So, it doesn't apply only to people who might have an auditory disability or something like that. It truly benefits everyone at your event.
So, I want to talk a little bit about the Live Captioning Initiative. So where can you find live captioning? The tool is currently live at lc.MidCamp.org. That was what Andy was demonstrating a couple of slides ago. And that's, like, a place you can go and you can see the tool in action.
We also have a website, www.livecaptioning.com, where our initiative lives. There will be more blog content and stuff there as well, and our aim is to also post captions as we get them with the tool. We also have a GitHub project where you can contribute code and things of that nature.
And also, you can support the project by going to opencollective.com/live-captioning-initiative to offer financial support, donations, and stuff like that.
So, we're kind of all over the Internet. Yeah, that's where you can find us.
>> ANDREW OLSON: Let's talk a little bit about live captioning. Like we said, we're not trying to replace CART, and it's great that we're able to provide CART in this Zoom meeting right now, but we fully understand that live captioning is not perfect. There are going to be inaccuracies with speech-to-text, and as laid out on the GitHub site, there are some known issues. One of them is timeout and refresh: as I said earlier, if you don't have a strong audio signal, the tool can time out, and you would then have to click a reconnect button to resume sending the audio to Google and getting the words back.
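The reconnect behavior just described can be sketched in code. Chrome's recognition session ends when the audio drops or times out, so a tool like this can listen for the `onend` event and restart. The `shouldReconnect` helper and the `captionActive` flag are illustrative assumptions, not the tool's actual API.

```javascript
// Decide whether a dropped recognition session is worth restarting.
// Permission errors shouldn't be retried; network hiccups and silence
// timeouts should be, as long as captioning is still switched on.
function shouldReconnect(captionActive, errorType) {
  return captionActive && errorType !== "not-allowed";
}

// Browser-only wiring; guarded so the sketch is a no-op elsewhere.
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;

  let captionActive = true; // flipped off when the operator stops captioning
  let lastError = null;

  recognition.onerror = (event) => { lastError = event.error; };
  recognition.onend = () => {
    // Fires on timeouts and dropped audio as well as deliberate stops.
    if (shouldReconnect(captionActive, lastError)) {
      recognition.start(); // resume streaming audio to Google
    }
  };

  recognition.start();
}
```

In the room setup described earlier, the volunteer plays the same role manually: they watch the screen and click reconnect when the text stops flowing.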
So: inaccuracies, the potential for timeouts, and the need for Internet access. There's also an issue, which we're going to work on on Contribution Day, of text extending off the page. If you have somebody who's talking really fast and does not take a breath, the text can extend off the page. That is something we recognize and would love to work on.
The last one is having the browser remember the microphone settings. If you view the lc.MidCamp.org site in an incognito window, it's not going to remember you and your microphone settings. We also found, and this is the reason we host it on the MidCamp site, that if you just download the page and run it from your own device (it's just HTML and only needs Internet access), the browser constantly asks for microphone permission. Your browser is trying to protect you because the page is not from a trusted source, and it continues to ask, which can be a little bit annoying. That's something we learned last year at MidCamp.
So these are the small imperfections and things that we're working on, because we recognize that live captioning is not perfect. But I also want to point out that open captioning is not perfect either. For example, I was at the Chicago accessibility Meetup for a great panel discussion with Marcy Sutton and Derek Featherstone, and it was actually captioned by ACS. So I'm sitting in the room, the captions are up on the screen, and there was a moment where Derek Featherstone was talking about a color-blindness simulator and contrast checker that can really help UI/UX people as they're working in Sketch. He said the word Stark, and if you know Derek, he's Canadian, so it came out as "stork" or "stalk" because of the long vowel.
So, in the transcript and on the screen, it said stalk and not Stark. The color-blindness simulator and contrast checker plugin is excellent and it's called Stark, but I had a little bit of trouble figuring that out afterward when looking it up.
So, it's just one word out of the whole transcript, but the point is that while there are going to be more inaccuracies with speech-to-text in our tool, there's also going to be some error simply from human nature, from people speaking and being understood.
>> QYMANA BOTTS: So, why would you choose live captioning? Well, first of all, it's free. Like we said before, all you need is the equipment; the tool itself is offered at no cost. It's also integrated with the Drupal Recording Initiative. So if your Drupal event is already going to be recorded, the live captioning tool has been tested with the recording initiative's setup. That's a quick win, an easier add, if you're already using the Drupal Recording Initiative.
It also lowers barriers for attendees. Like we said before, it helps people of all abilities better comprehend what's going on in talks. So it definitely lowers barriers for attendees and makes your event more accessible for a broader range of people to attend.
You can also get transcripts for your YouTube videos, which, again, goes hand in hand with the Drupal Recording Initiative, but also any other recorded thing, you can have a transcript for, and just very simply upload your transcript.
And also, when you post those transcripts on your site, it can increase your SEO.
So, we just wanted to showcase that very recently, there was an event in Denmark that stumbled across the live captioning tool and used it at their event, which was super cool. So it's already being used, and there were people at that event who said the tool helped them a lot.
So, it's already being used and it's open there and it's just really exciting that, you know, there's some real world, like, practicality to the tool, and we're super excited about it.
>> ANDREW OLSON: So how do you get this at your event? Again, we had the diagram, but what you will need is a computer with Chrome version 25 or later. Our tool only works with Chrome, which integrates with Google's speech-to-text API.
The next thing you need is Internet access. It has to go out to Google to get the speech to text back, so it can be on your screen.
You also will want an external monitor, so you can have that next to the presenter and the presentation screen, so people can focus on that and have their eye go back and forth from the slides that are being presented.
You also will want a microphone for the presenter. We've been saying you need a really strong audio signal for that to get the best speech sent to Google to get more accurate text back.
And lastly, very important, is to have a volunteer. As we've shown in the diagram, we want to take the burden off of the speaker; we don't want to impact the person who's presenting by making them handle accommodations. We want to make sure there's a volunteer who is focused on making the event accessible. Their job is to get things set up and help others out. In the end, that helps the presenter and the people in the room that day, and it helps facilitate the conversation, because everybody in the room has a better understanding of the spoken word.
But after that event, we're going to have a transcript that we're able to post to the site. If somebody misses that event, they're going to be able to read through it and get caught up, and if we're lucky enough to record that event, we're going to be able to see more accurate transcripts on YouTube, or whatever video streaming service you use.
So, we hope that this outlines, you know, how you can make your event more accessible using the live captioning tool.
This is a big thank you slide. What's interesting to me is that this is actually live captioning's birthday. Last year at MidCamp, Fatima was the keynote speaker, and if you don't know Fatima, she's great. She wants to be as inclusive as possible and asked if we had captioning, and she brought to us a CodePen by Dave Rupert showing how you can have an HTML page use the Chrome browser and a microphone and get transcripts on a screen.
So this wouldn't have been done without her asking that question, and without Dave Rupert providing the basis of the code. Last year at MidCamp, within just a handful of days right before the event, we were able to cobble it together, roll in some monitors on carts, and have a volunteer sit at a computer and caption as many rooms as possible.
And the next person I want to thank is Burton Kent. He's a hearing impaired individual that joined MidCamp last year, and directly benefited from this tool. And he gave us real feedback. At one point, we had the live captioning tool in I believe two rooms, but Burton wanted to go to other sessions that were not in those rooms, and so we pivoted, gave him a laptop, and a USB microphone, and he was able to take that computer, walk into any session room, set the microphone up by the speaker and had the laptop in his lap and was able to participate and contribute during MidCamp.
So, we've come a long way in one year, thanks to all of these people. And I have more people to thank: Bounteous and the Nerdery, for allowing Qymana and me to work on this and continuing to fuel the passion we have for getting this tool out there for everybody to use.
Glenn Blicharz is an outstanding friend and developer, and he's the person who figured out how to capture the speech-to-text in the browser so you can actually go back and look at it. A year ago, as the text left the screen, it was just gone; it went into the ether and was not captured. Glenn did what he does best, being a great friend and developer: he wrote it to the browser, then helped build an interface so people can look at it in both a timecoded format and regular text. That was a huge step forward for the tool.
Kevin. Kevin Thull. If you don't know him, he is a great all-around person and runs the Drupal Recording Initiative. He and I sat down, and we were able to use the microphone that he uses for all of his Drupal recordings. The gentleman was in London just last week, and our tool works seamlessly with his equipment. We're still working with him to get that out into the world, but it's great to know that it's happening.
The next person on this list is Mike Gifford. I was lucky enough at the last DrupalCon to just randomly bump into him. If you don't know Mike Gifford, he is, or was, the accessibility lead for the Drupal project.
I bumped into him, had a hallway chat, and got his great ideas; he really saw the value and the potential of this. It was something that was kind of novel and a great thing for MidCamp, but Mike Gifford really helped me see the vision of how this can help everybody, everywhere. So, thanks to Mike.
The next person on this list is JD Flynn. He is an outstanding developer and has great ideas about how to improve this tool. A struggle with this tool is the inaccuracies as the text comes in: misspellings and misinterpretations of what the speaker is saying.
So JD has done some great work building and tweaking with React on a way that a person, a room monitor, could potentially sit in that room and make corrections in real time so they appear on the screen. So, thanks to JD.
And finally, Karlyanna Kopra from the Drupal Association reached out to us. She heard about the great things we're doing here; I gave a talk at DrupalCorn in Iowa, and she reached out to us about getting this message out to the Drupal community, which leads me to the next slide and the next steps.
So, we totally understand where things are now with DrupalCon and what is going to happen with it, but we were excited to have our session accepted, and grateful to the Drupal Association: we were actually going to have one room using live captioning for both of those days.
Like everybody, our hearts and thoughts go out to the Drupal Association as we wait to see what the fate of DrupalCon is. Regardless, Qymana and I are committed to getting our session up and out there in whatever format. We'll be very disappointed if the decision is that we can't caption that room at DrupalCon, but we completely understand, and we just want to get this message out.
So, to summarize: Karly has been amazing at helping us craft our message. She was the inspiration for that diagram, because it really helped her understand how we were going to do this at DrupalCon, and it helped us communicate with the AV people about how to make the event accessible. She's been helping us ask great questions and pulling great content out of us, because Qymana and I are both developers, and she's really exercised the writing part of my brain to take this initiative out into the world.
More next steps. We love all things captioning, so we love to evaluate other captioning tools. MidCamp was going to be an in-person event, as we all know, and now it's virtual. But for in-person events, there's an outstanding tool called Thisten, and you should definitely look them up. If you do have an in-person event, their tool is great. It is an improved speech-to-text API that is more accurate and has some learning behind it.
So its accuracy is much better. They also have the ability to have a person in the room make those corrections in real time and broadcast them to the screen in the room.
Not only that, but the Thisten tool also allows anybody on the Internet to virtually attend your event and see the transcript, so they may not be able to attend, but as the transcript is happening, it's on their website.
And I visited their site this morning; they've pivoted a little bit. Their tool is really a direct competitor to live captioning, and I always laugh at the idea of being a competitor over a tool that helps everybody, especially the accessibility community.
But, you know, ours is a free tool, theirs is a paid tool with all those amazing and outstanding benefits.
So, the point is that since they've pivoted from in-person events, they now have a way to upload audio. They can take podcasts: you upload the audio of any podcast, it runs through their speech-to-text system, and it provides a really accurate transcript that is then given to the person who uploaded the podcast, who can make any corrections. It's a lot more accurate than our tool. They were an in-kind sponsor for our in-person event, but unfortunately, since we're not a podcast, they don't provide what ACS is doing: real-person, real-time CART transcribers.
So unfortunately, they had to back out. But, you know, they're a great tool and great people and have been outstanding to work with and we're disappointed we weren't able to use them for our event.
What else is next for the Live Captioning Initiative? We were able to work with the Chicago accessibility meetups. If you haven't had a chance, it's an outstanding Meetup when you're in the Chicago area. We were able to run our live captioning tool at the same time as real-time captioning, which was provided by ACS.
So I have a blog post waiting to be written, but I was able to see how accurate our tool is compared to a real-time captioner. That's coming soon on our livecaptioning.com website.
The other important thing that we're doing, and this is something that we would need help with for Contribution Day, is adding transcripts to the MidCamp.org website. What we can do is use our tool to create transcripts from any event that's been recorded and any audio, and we can add those to MidCamp.org to really increase the reach of our session and our pages, because all of those great words and keywords can really help the website and also just help any visitor that doesn't want to watch the video, and they just want to scan through the conversation that happened.
Virtual meetings. We're living in a world, in the short term at least, where we're not getting together in rooms the way this tool was laid out for. One thing we're going to look at is whether there's an integration with Zoom. Right now, we have an integration in Zoom with a real live captioner, so what we're going to do is put a bunch of smart people in a room and see if there's a way we can use our live captioning tool so Zoom can have captions from it.
And the last thing on the list is viewing on your own device versus in-room monitors. That's what I was talking about with JD Flynn: allowing people to make corrections as the speaker is talking, and allowing anybody to follow along. It's basically trying to take our tool and make it more like the Thisten tool. We'd love to see if that's something we can add to make the free tool even better.
All of that leads toward Contribution Day on Saturday. We'll be around, and this is the way to reach myself and Qymana. We're also on the MidCamp Slack channel @captioning, and we just love to talk about this stuff and would love to talk to you more and hear more ideas about where this free tool can go.
What I have up here are the links once again: our website, the project on GitHub, the free tool, and our Open Collective. We're still figuring out what our Open Collective is for and how we would use the funds, so those are things we'll put brainpower toward on Saturday.
Two more slides, and then we'll open it up for questions. Please provide feedback. Thanks to everybody. On the MidCamp site, there will be a way to give feedback, and also please join us on Saturday for Contribution Day.
I'm going to back up to the links here, and I think we can open the floor to questions, unless, Qymana, do you have anything else to add?
>> QYMANA BOTTS: Nothing from my end. Just yeah, thanks, everyone, for attending and also, again, thanks to all of those amazing people that helped us kind of get this to where it's at now and are, you know, guiding forces as we continue to develop this tool.
>> ANDREW OLSON: I can probably go over to see if there are questions in the chat or the channel. I don't see any in the channel. I have my phone up.
>> I've asked people to unmute if they want to talk to you. Any participants who want to speak, you can unmute yourself now. Okay. We're just in awe.
>> ANDREW OLSON: It's a lot.
>> This is the presentation that I was flying to Chicago to see. So, I'm really
>> ANDREW OLSON: Oh, my.
>> So I'm really happy to have this information. I'm thrilled to see Mike Gifford is a friend of mine from Ottawa, and
>> ANDREW OLSON: Oh, great.
>> So, it's nice to see people I know involved in this project.
>> ANDREW OLSON: Well, that's really humbling and awesome and would love to hear more about how you want to use it.
When I spoke about this at DrupalCorn in Iowa, there was a lot of academia there, and one person raised their hand at the end and said, I can just use this tomorrow for meetings. A lot of the people there managed websites, and they kept getting asked by faculty and staff about how to do something like this. A lot of people have used Google Slides with captions over to the side, which uses the same API. So people are getting creative. This is just a free website you can go to, to capture anything.
I have personally used it as I've been writing blog posts and practicing speaking, even for this presentation. The tool allows you to see how well you're speaking, how fast or slow you're speaking, and how well you're being understood.
So the Google API can help you slow down and become a better speaker and presenter. The other part of it, too, is a lot of times I'll just talk to somebody about an idea, or talk as I'm writing something, and then I'll say, what did I just say? I struggle a bit to write and type it out, so it can be something you talk at when you're working through an idea or a concept. It's great for documentation.
If you're trying to describe something, like how to open this or go to a certain part of a website, it allows you to just talk through it and not have to type it out. So those are some other ways to use this tool.
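[For readers curious what "the same API" looks like in code: a minimal sketch of browser live captioning via the Web Speech API, the speech-to-text mechanism described at the start of the session. `webkitSpeechRecognition` is Chrome's prefixed constructor; the helper and callback names here are illustrative, not the MidCamp tool's actual implementation.]

```typescript
// Pure helper: append a finalized phrase to the running transcript.
function appendFinal(transcript: string, phrase: string): string {
  const trimmed = phrase.trim();
  if (trimmed.length === 0) return transcript;
  return transcript.length === 0 ? trimmed : `${transcript} ${trimmed}`;
}

// Chrome exposes the Web Speech API under this prefixed name.
declare const webkitSpeechRecognition: any;

// Browser-only wiring: Chrome sends the audio to Google's servers for
// conversion and streams recognized text back to the page.
function startCaptions(onUpdate: (text: string) => void): void {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true; // keep listening across pauses
  recognition.interimResults = true; // stream partial results as you speak

  let finalText = "";
  recognition.onresult = (event: any) => {
    let interim = "";
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const result = event.results[i];
      if (result.isFinal) {
        finalText = appendFinal(finalText, result[0].transcript);
      } else {
        interim += result[0].transcript;
      }
    }
    onUpdate(appendFinal(finalText, interim)); // re-render the caption area
  };
  recognition.start();
}
```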
>> Yeah. I see we had a user who had unmuted, but now they've muted again. Charles, did you have a question?
>> I guess I can ask a question. This is Mike.
Just wondering about the contribution sprint. Could you explain more about that, like what we can do, and if we need to do anything in advance, like download software or anything?
>> ANDREW OLSON: Great question. We'd love to have you. Probably the best place to start is the GitHub site. You can read through the documentation of like how it works. You can download the tool and play with it on your own.
The other part of it is the projects part, which I'm going to clean up. I was focused on getting through this presentation, but between now and Saturday, I hope to outline a few more cards to make it easy for people to explore. There are already a few cards out there on the GitHub site.
And let me actually bring that up and go to it. I'm going to drag it down here. I'm not sure where that went.
>> Also, will there be stuff to do for those who might not be developers?
>> ANDREW OLSON: Definitely. There's writing, which would be great. Here we go. Is everybody able to see this now?
So this is the GitHub site: MidCamp live captioning. You don't have to be a developer. Help with some of the wording I'm describing would be useful, and I'd love some help with some of the blog posts. I do have both of those transcripts. So, long answer: yes, you don't have to be technical to join us. If you click on Projects up top, we also have a newly minted website built in Jekyll, and we'd love some help with it. We'll identify some cards here in the project, but one of the live captioning enhancements in the to-do column is the one I talked about: the ability to adjust text size and background colors.
This is, again, from Mike Gifford. He said, maybe it's something like this, where a user can adjust the font and change the size. I would love this to be part of our tool, to give that power to the person who joins live captioning. It's a free tool, so this is a big one that's high on my list.
Allowing users to clear the text. Some of our interface probably needs a little bit of work, but I'm wide open to suggestions to help make this tool better. Does that help? I think that was Mike. Yeah?
>> Yeah, that was great. Thank you.
>> ANDREW OLSON: Of course.
>> Okay, we've got two more minutes until we have to release the room. So if anybody else has anything to say, now is the time.
>> ANDREW OLSON: Here's the site. FAQs is empty, so it would be great to bust out some FAQs. The biggest thing for people to understand is that there's nothing to download, nothing that you need. Anybody can go to the site and start using it.
So it would be great to get some help out here. It's as simple as beginning to speak. So here it is. Session 13. Obsession 13. Yeah. Not session 13. I am a mumbler, so this does help me speak more clearly and slowly. And then you can click here for the text.
Well, we'll take the conversation elsewhere. We can use this room chat in between, but I highly suggest you join the MidCamp Slack channel for captioning, and we really hope to see you Saturday as we figure out how to make this tool even better for people and share as we go. Really appreciate your time today.
>> QYMANA BOTTS: Thank you.
>> Thank you so much.
Qymana, are you still there? I saw on the spreadsheet that you are hosting the next session in this room.
>> QYMANA BOTTS: Yes.
>> ANDREW OLSON: I can stick around too and help.
>> I was going to say, I don't have anything in the next session, but I can pass it along to one of you.
>> QYMANA BOTTS: Yeah. I wasn't sure if I wanted to host. I was avoiding it because my Internet service provider is a little questionable, but I was going to do the room monitoring.
>> Yeah. Okay, well, if you want to do that. In terms of hosting versus monitoring, it hasn't seemed to be a big deal. I've just got the presentation, the actual Zoom, up on one screen, and then all the chat and Slack and the participant list on another one.
>> ANDREW OLSON: You stopped the recording. How did the captioning go? If anybody is still on, I'm just wondering if that went well.
>> QYMANA BOTTS: The captioning looked good to me.
>> ANDREW OLSON: Great. Thanks, Kacie, for being our captioner. The next event starts at 11:00; is that correct?
>> QYMANA BOTTS: Yes.
>> ANDREW OLSON: Cool.