>> SADIA RODRIGUEZ: So I was talking to Doug as we were setting up this room, and he made the point that it would be helpful to explain to the people with the pursestrings why having a QA resource would be really helpful. I hope that the things I talk about during this hour will give you talking points, and even give you the opportunity to gather data on the QA costs you're incurring as a team, so that you can make that argument from a return-on-investment perspective.
So I'm Sadia -- it rhymes with Nadia, for those who like that mnemonic -- Sadia Rodriguez. I'm in Round Rock, which is just north of Austin, Texas. I work at Bounteous, like a lot of the people who are at this conference and speaking; it's been pretty cool to see my co-workers in a different virtual setting than usual. I'm a quality assurance analyst by title, and that's definitely most of the work that I do at this point. But I also fill in on the requirements management side of the house as a business analyst when there's a greater need for that.
I like to stay really busy, so this whole stay-in-your-house thing is very challenging. I'm a FIRST LEGO League coach and judge, and in fact I'm trying to figure out why the team broke their robot. So that will be one of the things I will be video conferencing about this weekend.
I also volunteer with Girl Scouts. I'm a STEM coordinator and a bunch of other things, but for this audience the STEM coordinator part is the most interesting. I volunteered at the Hour of Code event this December, where we served 86 Girl Scouts. I'm a cat person -- I love dogs, too, but I'm a cat person -- and I volunteer with the galaxy project at the local shelter, the Williamson County animal shelter. As I said, I have 13-year-old twins, so it's very dramatic at my house right now, and I'm really, really, really good at procrastinating and not always great at fixing the effects of procrastination. So there you have it. And the way I got into QA is I used to be a mainframe programmer.
So essentially I was -- and I'm not quite as old as that sounds. But you know . . .
I was working at a state university that had much of its administrative software custom built on the mainframe, and I was on the same system for a long time. And I kept saying, hey, you know what, we need QA, we really need testing, we shouldn't have had that bug in production. And finally my management got annoyed with me and said, you think we need QA? Then you be QA. So I did. And then I found out I love it, and now I continue to do it.
But I suspect that some of the folks in the virtual room, if you're at this talk, are in a similar situation, where you would perhaps like to have more quality assurance than you were budgeted for and would like some solutions.
So I would like to get a feel for the room, if that's okay. If you can just tell me, just in a few words, either what your role is, or what you're hoping to get out of this talk, or maybe even a little bit about what the QA landscape of your team is. Do you have a QA person? Do you have part of a QA person?
Just let me know in the chat and I'll give people a few minutes to let me know what it is that you're looking for.
>> DOUG DOBRZYNSKI: Sadia, if you would like and people are willing to talk voice, I can unmute individually and let you know.
>> SADIA RODRIGUEZ: Sure. Are people -- would people prefer that? Or raise your hand, I think, is an option, and Doug can unmute you when it's time.
All right, so we have one I.T. manager without any QA available. And as you probably already picked up, I'm not a short speaker. I'm very long winded, so Twitter is not my happy place, at least as far as putting out output, but I will definitely share resources as they come across my plate.
Oh, I know about threads. Thank you.
Okay. So we have someone who has some QAs. That's awesome. Team of two with no testing or QA and you test your own stuff. Been there, done that.
Devs test each other's work, and I think that's the way people normally get to the point where it's like, you know, it would be really nice if we had QA. Keep them coming; I'll keep my eye on the chat, but I'm going to go ahead and move on because I have 50 bajillion slides and I have stuff to say. So what you can expect from me is I'm going to, first of all, make sure that we're all on the same page as far as what we mean by quality assurance. My definition personally is a little broader than I think is traditional. You don't have to share it. But I want to put out there that I see QA as something bigger than just testing. I'll try to give you some ideas -- these idea slides have a little light bulb on them -- for how to get QA done when you don't have a designated QA person. And I'm hoping you'll come away with at least a basic idea of how to plan and execute testing when you're a fairly lean team, and I mean that in the colloquial sense, not the technical lean-process sense.
But when you just don't have that many people and you need to get stuff done.
So what is quality?
I know it when I see it. Well, that's a perfectly lovely pat answer. It's not very helpful.
So specifically when it gets to software quality, you can throw a whole bunch of stuff in there. I think what we tend to think of first and foremost is: does the site do what it's supposed to do? Which is, one hopes, conformance to requirements, because one hopes you have requirements. I used to work with a really great guy named Chris -- and this was back at the state university, when we didn't really document requirements. We didn't have QA. Everyone was a developer.
And he made the wonderful argument to me one time and it stayed with me that if you have self-documenting code and no requirements, then there are no bugs because the code is always doing exactly what it said it was going to do.
So I'm a huge fan of comments in code and making sense of things. And writing good code.
But I do find it to be helpful to have requirements that are somewhere outside of the code.
But there are a lot of other things that go into quality assurance. If you're collecting data or displaying data, data integrity and data accuracy are important. And the distinction there is: data integrity is whether the data is internally consistent, and data accuracy is whether it reflects whatever world it's supposed to reflect, up to date and accurately.
Usability is very important. Because it's great to build the prettiest, most functional site, but if a user can't figure out how to do what they need to do, it doesn't help them.
There's a lot of things on this slide I'm not necessarily going to get to all of them. Because that -- this is -- I have given this as a four-hour class before so I'm really trying to bring it down to an hour. And the slides are available on the MidCamp site.
Come on. There we go.
Okay.
So from my perspective, I feel like quality applies to every part of a project, or a site. So yes, the code and the functionality, of course. But also, in my opinion, to the process, to communication and documentation, and just all the pieces that come together so you can work as effectively, efficiently, and happily as possible.
So testing, huge part of QA. But I personally don't see it as the only part of QA.
So with that in mind, knowing that quality is all the things -- and oh, I was at a talk yesterday and I forget who said it, but it's kind of like accessibility. We have come to a point in our understanding of the importance of accessibility where we really do understand that it's everyone's responsibility, and I feel like quality is in that same place.
So again, the pat answer is who is responsible for quality? Whoever -- you know, the quality engineer, the quality assurance manager.
That would be the wrong answer. Just you know so you know.
I would say that everyone is responsible. But you need to have ownership. Right? Like at my house when I say everyone in this house eats food therefore everyone in this house is responsible for washing the dishes, guess how many dishes get washed? None of them. Right? So there needs to be ownership.
So you can have everybody having a stake in having responsibility but you also need to have designated ownership to make sure that things happen so that you're meeting the quality standards you need.
So the first thing that I would say is if you don't have a devoted QA person, just make sure that you identify what your QA tasks are that need to be done on a project or a team or a site. And distribute those tasks among the members of the team.
So here is a short list. Your mileage will vary, your needs will vary because different projects different sites have different needs. But here are some basic starter ideas that you can distribute amongst the people on your team.
So planning the testing.
And then executing the testing.
Right? Those are the big ones. Let's figure out what we're going to test and how long it's going to take and let's actually do it. But also bug documentation and reporting.
So ideally the person executing the test is also going to document the bug, but maybe that's just not the mix of strengths that you have. Maybe you have people who are pair testing, where one person is better at actually navigating things and another person is better at documenting.
So you know choose how it works for your particular team.
You need somebody to actually look at the bugs that have been documented and any improvement ideas that have come up and prioritize them. Usually that's the Project Manager but some projects don't have managers.
Test data creation can be its own task. Writing unit tests. I think that goes alongside coding. But you know perhaps you're a test-driven shop and you do things in a different order. Whatever works for you.
Test environment and framework setup -- that's something people sometimes forget when starting a project, and it's important to think about how you're going to set that up and who is going to do it. If there's going to be any automation: who is going to automate it, who is going to maintain those tests, who is going to review the output every time you run the tests, and who is going to document any bugs or improvement ideas. Process evaluation and improvement, I would argue, is really a place where everyone should have a voice. But you still need someone to own it and drive it and make sure you're having those discussions.
If you are an Agile shop, then you may very well be doing this regularly as part of your retrospectives and not even realize it's a quality thing, but I put it under quality because you're getting better. And then of course code review -- I assume in 2020 we're all doing code review -- but that's a big part of QA, as well.
So before I move on, are there any questions here? I'm watching the chat.
All right.
Sorry; I keep having to move my chat window around because my content is placed in different places. So, ultimately there won't be a one-size-fits-all solution. I wish I could just write a test plan for you all and just be like, here, execute this, but that is unfortunately not how life works. What I would recommend is that as a project team, you think early and often about what your quality goals are.
Make a plan for how you're going to implement quality. Carry out the plan. But be flexible. Expand it or contract it as the need arises. Schedules do what schedules do and it's more common that you're going to have to contract what you plan to do in the way of testing. As compared to expand it but we can always hope. And then make sure that in your project planning you're taking the time to think about what prep time you're going to need and what execution time you're going to need and documentation time that comes with QA.
So why do we test? We have talked about QA more generally; now I'm really going to start digging into testing.
So the basic point of testing is to make sure that what you have built is of an acceptable quality. And again, I'm being wishy-washy on purpose, because what's acceptable is completely different depending on your site, your project, your customer, your users, and your team. Everyone will have a different idea of what quality means, so I do recommend that wherever you are in your project -- ideally early on as you're setting up the project team, but Monday, if you haven't done this yet -- sit down as a team and figure out what quality means for you.
And then do your planning around that. And that way you keep yourself from testing things that aren't that important. Or spinning your wheels on something that is important that ends up taking so much time that you're not able to get to the other things that are important. So the goal of testing, verify that your site is of acceptable quality.
So how do we go about test planning? Again, there is no one-size-fits-all solution.
So there's a reason I duplicated this slide. I feel like it's really, really important for you to take the time to think about what your goals are.
And so, okay, we have amazing design people at Bounteous. I did not go to them, because I'm a master procrastinator, so please don't judge them; this image is all me.
So all you project managers out there rolling your eyes: yeah, we know there's a project triangle. You can have it cheap, you can have it good, you can have it fast. The good part is the quality part, so it is going to cost money. And it's okay for a project team to say, you know, it's more important for us to get this done fast than to have it relatively bug free. That is acceptable. It hurts my QA heart. But it is acceptable.
But make sure that everyone on the team is on the same page. So be realistic about what you can test and how long it's going to take. Don't bite off more than you can chew.
So that's great and all. But how do I strategize that is the question.
So I recommend that you be aware of all of the things that you could be testing. So go back to that list that I had earlier of all of the different areas of quality.
And then decide which ones apply to your particular project.
Like on the project that I'm working on right now, we specifically said that security testing is not within the scope of my company. We recommend that it be done. But it's not in our contract. And therefore, we are not going to do it.
And then time box. Say, we will spend this much time on this aspect of testing. And then figure out what is the most important and work your way down.
So if you run out of time, you still hit all of the high points early on.
So I realized as I was starting to talk that I have not added a link to a test plan that I would like to share with you guys.
So what I'm going to do is I'm going to go ahead and add a link to the bottom of this slide, reupload my slides so that you can get to just a PDF of my latest test plan.
But I would say, you know, write up who do you have and what are their roles. Not even necessarily what are their testing roles. But like what is their role on the team. Because you might be able to pull people into doing some testing or doing some quality work.
Decide what your high-level goals are for quality. I know I've said this three times but it's important. Identify the types of tests that will be conducted and the tests that aren't.
And then based on that, describe the testing infrastructure you're going to need and any prerequisites that need to be in place. This will include, of course, your test environment; any data that you need -- you may need to have a DEV data source, a stage data source, and a production data source; and any testing tools that you're going to use. Talk about when you're going to do a code freeze, so that your final round of testing can be on a set of software that isn't changing out from under you.
And then how you're going to handle these things. Right? Like how are you going to write your bugs? How are you going to decide which ones to fix and how are you going to decide which ones to live with?
Hopefully you find most of the bugs because you've prioritized things right so you can make informed decisions. I would much rather say, this software is going out with these 23 bugs than this software is going out and it probably has bugs but I don't know how many or what they are.
So this I have quite literally been doing since 2005, well before I was in QA. When I was a tech lead I was still preparing a minimum testing checklist. And it has saved my butt many a time, so I went and found one from one of my old projects at the university.
And this was a project on our public directory service.
So this is the entirety of the minimum testing checklist. These are the very, very, very, very basic things that need to work for me to have some sense that the system is okay. So one example of these five tests is search for a public record by first and last name and make sure that returns results in the three interfaces that we care about.
And I want to do that in the test environment, the qual environment and for production release.
And then you'll note that test 5 is pertinent only to production release: make sure that my searches are repeatable from outside the university's network. Because my test environment and my quality assurance environment are firewalled, so I have to be on the university network to get to them. But for the production site, I need to make sure that those firewalls aren't preventing access. So to me a minimum testing checklist is super minimal. It's just -- it's smoke testing, a heartbeat check, whatever you want to call it. But it's the very bare bones that says, okay, the system is running.
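To make that concrete, here is a tiny sketch of what a scripted version of one checklist item might look like -- searching by first and last name and confirming results come back -- run against each environment. The hostnames and the search endpoint are hypothetical placeholders, not the university's actual system:

```typescript
// A minimal "heartbeat" sketch of one checklist item -- search by first and
// last name and confirm results come back -- run against each environment.
// The hostnames and endpoint below are hypothetical placeholders.
const environments: Record<string, string> = {
  test: 'https://directory-test.example.edu',
  qual: 'https://directory-qual.example.edu',
  production: 'https://directory.example.edu',
};

async function searchReturnsResults(baseUrl: string): Promise<boolean> {
  // Hypothetical search endpoint; adjust to whatever your site actually exposes.
  const res = await fetch(`${baseUrl}/search?first=Jane&last=Doe`);
  if (!res.ok) return false;
  const body = await res.json();
  return Array.isArray(body.results) && body.results.length > 0;
}

(async () => {
  for (const [name, url] of Object.entries(environments)) {
    try {
      const ok = await searchReturnsResults(url);
      console.log(`${name}: ${ok ? 'PASS' : 'FAIL'} -- search by name`);
    } catch (err) {
      console.log(`${name}: FAIL -- ${(err as Error).message}`);
    }
  }
})();
```

Something this small can run before every deployment and on a schedule against production.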
All right.
Any questions about this?
All right. So how do you prioritize? How do you decide what goes on that minimum testing checklist? As with everything else, it depends. There's no one-size-fits-all solution.
For one site it might be your most visited pages. So you may be looking at your analytics data and saying, our top ten pages that users actually come to are the ones that we need to check and have a really, really, really clean look. Or maybe you're in a different business area, and what's more important is specific functionality within those pages that is incredibly high visibility.
Because you're doing some sort of advertising thing and you know that people are going to be flooding into this one page and you need that to work really, really well.
Or for more stable ongoing sites, you may need to have the most business critical items working. So maybe checkout needs to be the thing that you confirm works from beginning to end.
Apparently I thought that I needed to have the highest something but I don't remember what that was.
(Chuckles).
>> SADIA RODRIGUEZ: I think -- oh, highest revenue generating; that's what I was going to say there.
And then next maybe the most fragile. Maybe there's something that just breaks all the time and you just want to make sure that that works because you know if that works, everything else is probably okay.
And then what we do most often is test the newest stuff. Make sure that any changes that we're rolling out have been addressed and fixed.
I generally keep a minimum testing checklist that's for the system as a whole, and it will be built on one of these top lists: most visited pages, highest visibility, et cetera.
And then for a particular project, I might have its own minimum testing checklist, which is about the new functionalities. So one is sort of a regression view of the system and the other is a new functionality view.
I'll check on time. About halfway through.
So minimum testing checklist is great for deployment testing, it's great for emergencies, it's usually not enough for the whole thing. Right?
So the V model of test planning and I'm giving you like a very quick overview of something that is extensive, is essentially that we plan early for the testing that comes at the end.
And we plan in more detail in smaller chunks as we go through the process.
So you're going to start really early on looking at your requirements and going, okay, what is the acceptance test going to look like? What are -- if you're building something for a client, for example, what is the client going to look for to make sure that we built what we said we were going to. And what non-functional testing do we need to have? Do we need to have failure recovery testing? Do we need to have load testing?
What other things do we really need to plan for and think about? Because it's going to take a while to get those things ready. And when it comes to acceptance testing, well, that's not going to happen until you're pretty close to done, or at least until the front end is there, so you want to think about what requirements we are fulfilling this sprint and how we are going to see whether they are acceptable. Then as you do more system design and architecturey things, you'll think more about integration testing. And then as you write the code, maybe you'll be writing the test plans or just the unit tests; then you'll do the unit testing; once you have those pieces put together you'll do the integration testing; and once you have those done you'll do the non-functional and acceptance tests. I'm not saying that every team will be able to do all of this, but what I am asking you to do is at least think about these things and actively say, this we don't have time to do.
Rather than just going and testing the shiniest thing that comes up. Because goodness knows that's tempting but that's where we kind of get blind to the risks that are presented.
So make a plan. Is my point.
Follow it. Until it needs to be updated. Then update it.
So again going back to strategy, be aware of what the test -- I am going to talk a little bit about tools. But I am -- I've been out of actual development for a while. So I'm not going to have great unit testing advice or anything. But I can certainly gather advice from my Drupal knowledgeable teammates and send you an updated list if you would like me to.
So sorry; that was a question in the chat that I'm addressing.
So time box your testing. Once you've decided what you could be testing, put time around it so that you don't go too far down any rabbit holes. And then remember to account for the learning curve. Whether that's a learning curve of somebody new, learning your system, somebody new to the project understanding where your requirements are, somebody who has never done testing before, learning how to test, just take that into account. Don't expect everyone to do things at the same pace. And that way you're less likely to regret it later.
So there's a fairly common challenge in our universe, which is that we are asked to go confirm that something works and you have no idea what works means.
This tends to be something that QA in particular deals with a lot. Developers for the most part, if you're testing your co-workers' work you've got someone there that can tell you what this is supposed to mean. But this is normal. I couldn't remember if I had written up my suggestions for this or not.
What I would say is, you know, investigative testing is testing. Go in, try things out. And use empathy. Think about the person who is going to be using the product and think about it on their terms. Because you probably have some user out there who is going to come into this system needing to accomplish a goal. So find out from your clients what goals people need to be able to accomplish with what you're building, and then just go see if you can accomplish them.
If nothing else, you're doing usability testing there. And it gives you a chance to learn the question.
So be agile. I work in an Agile shop, and I suspect many of you do, as well. Everything I've talked about, I know, sounds really waterfally. It isn't. Well, it comes from waterfall. But what I do at a sprint level is functionally test each story, and at the end of the sprint, I perform a set of regression tests.
Now, that regression testing is usually my minimum testing checklist plus the important things, things that changed in the prior three or four sprints so I know they are kind of moving. I will talk to developers and make sure I understand which parts of the code they will be touching so that I know what parts of the functionality could be affected.
And so all of this can be done at the level of a sprint. So when you're grooming your stories, be thinking about what testing is going to go into that. And if something is going to be particularly complex to test, you may even consider including that in your estimates. Regardless of how you do estimations.
And really importantly, all of this should be live.
As your site evolves, as you add new functionality, as you deprecate things, update your regression testing list so keep it somewhere where everyone has access to it where someone can jump in as necessary.
So who does the testing? Right? And I've heard from some of your answers earlier in the chat that there are different people on different teams. Some of you have devoted QA folks for some it's the developers who test each other, for some people you're testing your own code.
So know your team.
I love testing. Like I started out literally as a mainframe programmer. But I love that the QA role gave me the opportunity to see the big picture. Because not only do I need to understand, you know, where we might need a little bit more code coverage at the unit test level, I also get to see all of the end-to-end stuff and how users are interacting with the system. It's a really, really neat way to get to know a system.
So use testing to see the big picture. Use it as an educational tool, if nothing else.
Be fair. Just because somebody is the most junior person, don't dump all of the testing on them. It's not nice.
And just because somebody is really, really great at testing, don't take full advantage of that and make them do all the testing. You know, they probably -- if they are a developer -- want to develop.
So do what you can. There will be some grumbling. But try to share the work as fairly as possible. Take advantage of the talents and skills that are available to you, of course, but make sure you're allowing people room for growth, as well. And I will come back to this a bazillion times: the No. 1 trait of a good functional tester is empathy, because you want to put yourself in the shoes of the person who is using the thing. Right?
And so when you're testing a co-worker's code, it's not to trip them up. It's so that whoever you're building this for at the end of the day can just get done what they need to get done.
So this one. And I started my career being the person who tested my own code and that was the kind of culture that I grew up in as a baby developer.
But I don't recommend it. And the reason that you don't want to test your own code is the same reason that you don't want to proofread your own resume. You know what you meant it to say. You know what you meant it to do.
And you're not thinking about the other guy who thinks differently than you that does things -- that works through the process in a completely different way than you. We do not catch our own mistakes. We know what it's supposed to say. And so having another teammate review your code is important.
There have been systems that I've worked on where I was the only developer on that whole system. And I would talk to my buddy who was also a developer and the only person on their system, and we would check each other's things. We didn't even work on each other's systems, but I knew I needed fresh eyes.
All right. So test automation. I love me some test automation. I really, really, really do. I think it's such an incredibly powerful tool. But you can't automate everything.
And test automation doesn't replace the need for manual testing. Because when you manually test something, human judgment comes into it. And you notice things that you wouldn't necessarily think to automate if you didn't go in there and try it.
So I -- even when I have my test as automated as makes sense and optimally automated, I still go in and do manual tests just to make sure that really obvious things aren't broken.
So I highly recommend if you have automation resources to automate regression tests. Things you do over and over and over again. Because it saves time.
But even within that, I would caution you to automate functionality that is fairly stable. Because otherwise you're going to end up spending all of your time reworking the tests, which doesn't help. And if there are data issues, definitely write your tests in such a way that you reset the data. Take date-specific tests: if you code the data to say, today plus ten, that's fairly easy to keep working. But if you're testing something like an event calendar, it can be a fair amount of work to set up the data so that it consistently plays nicely with automation. So just keep that in mind.
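On that date-specific point, one low-effort pattern is to generate the test data relative to the day the test runs, so a "today plus ten" event never ages out. A minimal sketch -- the Event shape here is made up, not any particular CMS's:

```typescript
// Build event-calendar fixtures relative to "today" so an automated test never
// goes stale. The Event shape is hypothetical; use your site's own fields.
interface Event {
  title: string;
  start: string; // ISO date, YYYY-MM-DD
}

function daysFromNow(days: number): string {
  const d = new Date();
  d.setDate(d.getDate() + days);
  return d.toISOString().slice(0, 10);
}

// "Today plus ten" style data: always one past, one current, one upcoming event.
const fixtures: Event[] = [
  { title: 'Past event', start: daysFromNow(-10) },
  { title: 'Today event', start: daysFromNow(0) },
  { title: 'Upcoming event', start: daysFromNow(10) },
];

console.log(fixtures);
```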
So test automation, not the solution. It's just a tool that is part of the solution for, I don't have QA people.
Some testing tools. I could do an entire talk about each of these, except the visual testing tools, because I have zero experience there. But there's a reason why Selenium keeps rising to the top: it's reliable, it's there, people know how to use it, and you can find resources who have worked with Selenium. It works really well for frontend testing: click on this button, go to this place, fill out this form, check for errors. It's great for that. Siteimprove -- not free, but it's a really good tool to check on things like SEO and broken links and all of that good stuff. It does also have some accessibility functionality.
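As a rough illustration of that click-fill-check style of Selenium test (the URL and CSS selectors below are placeholders, not a real site):

```typescript
// A rough Selenium sketch of the "click, fill, check" kind of front-end test
// described above. The URL and selectors are made up for illustration;
// swap in your own site's form and results markup.
import { Builder, By, Key, until } from 'selenium-webdriver';

async function searchSmokeTest(baseUrl: string): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(`${baseUrl}/directory`); // hypothetical search page
    const box = await driver.findElement(By.css('input[name="q"]'));
    await box.sendKeys('Jane Doe', Key.RETURN); // submit a known-good query
    // Fail the test if no result rows show up within 10 seconds.
    await driver.wait(until.elementLocated(By.css('.search-result')), 10_000);
    console.log(`OK: search returned results on ${baseUrl}`);
  } finally {
    await driver.quit(); // always clean up the browser
  }
}

searchSmokeTest('https://example.com').catch((err) => {
  console.error('Smoke test failed:', err);
  process.exit(1);
});
```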
I love, love, love both axe and WAVE as accessibility testing tools. Axe actually has a really neat plug-in that you can throw in -- okay, Monsido, I've never used it. Axe has a really great plug-in; we used it on a project with a Protractor framework, and I forget what else was in the stack, but essentially the axe plug-in runs basic accessibility tests on page loads. So you drop it in, add a bunch of URLs, and it just spits out really helpful feedback on accessibility. So I highly recommend looking into that one.
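The speaker's team drops the axe plug-in into an in-house Protractor framework; as a sketch of the same pattern with different plumbing, here is axe-core injected via Puppeteer and run over a list of URLs. The URLs are placeholders, and this is not the framework described in the talk:

```typescript
// Sketch of running axe-core against a list of pages, in the spirit of the
// plug-in described above (using Puppeteer here; the URLs are placeholders).
import puppeteer from 'puppeteer';

// axe is injected into the page below; this declaration just keeps TypeScript happy.
declare const axe: {
  run(): Promise<{ violations: Array<{ id: string; impact: string; help: string }> }>;
};

const pages = ['https://example.com/', 'https://example.com/contact'];

(async () => {
  const browser = await puppeteer.launch();
  for (const url of pages) {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle0' });
    // Inject the axe-core script that ships with the npm package, then run it in the page.
    await page.addScriptTag({ path: require.resolve('axe-core/axe.min.js') });
    const { violations } = await page.evaluate(() => axe.run());
    console.log(`${url}: ${violations.length} accessibility violations`);
    for (const v of violations) {
      console.log(`  [${v.impact}] ${v.id}: ${v.help}`);
    }
    await page.close();
  }
  await browser.close();
})();
```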
And then one of my co-workers has recently started looking into visual testing automation. So are the things in the right place on the page? Do they look right? Are they the right color? And these are the three tools -- Ghost Inspector, Percy.io, and Applitools -- that rose to the top of his investigation. I haven't looked at these except to review his findings, but I think that's a really exciting space that we can grow into as far as automation goes.
Check. All right.
So automation is not free. Tests get stale very, very, very quickly. Test maintenance takes time. (Sounds like you should talk to Beatrice about Percy.) And investigating test failures takes time. What I have seen happen time after time after time is that people were a little too enthusiastic, a little too aggressive with automating tests. And then a test broke. And then they stopped running the tests. Or even worse, they run the tests every day and nobody looks at the output anyway. So my recommendation: if you're going to automate, keep it short, sweet, and pertinent -- and pertinent means up to date.
So how do you think like a tester? Now that you've tried to convince your management that you really, really need a QA person and they are like, nope, you get to do it.
So this is my personal philosophy. I know that there are places where there's more of a combative relationship between DEV and QA; I choose not to work at places like that. So in my opinion, a tester's job is to make the site and the team look good. If you're a developer who is having to do testing, your job is to make the site and the team look good.
Right?
So you're helping by getting the cleanest code, the best functionality, the most consistent experience out there. It's not about catching people in a mistake. At all.
It's not about embarrassing people. Okay?
To answer Truls -- is that how you pronounce it? -- I honestly haven't been doing too much automation recently. I've been doing more of the strategy sort of stuff. But I can definitely ask my automation co-workers what they recommend. We have an in-house Protractor framework, as I said, that we can just drop into our projects, and it's really, really easy even for our manual testers to add in tests as we go.
So I've been using that for the most part when I've been doing automation recently.
So again, with the empathy. Especially when you're doing end-to-end testing, put yourself in the user's shoes. What are they trying to accomplish? And who are they? And really importantly, what do they know?
So I love personas. As a tool for exercising empathy. Because we are so used to, I have a requirement. I need to code it. Blah. We don't necessarily take the time to step back and go, okay, how does this fit in the grand scheme of somebody trying to accomplish a task on a site.
So think about an experienced site user. They know the site inside and out, they have something they want to do, and they want to get there in as few clicks as possible. How do they accomplish their goal?
That's going to be a completely different test than a brand-new visitor who is looking at cues in your site to tell them where to go to do the thing.
A content author, again, is a completely different persona, but we really want to make content authoring as painless a process as possible; otherwise we wouldn't be working on this platform. A site administrator. And then you can also take on completely different personas that aren't necessarily role based: if somebody is coming from another country, is your site very U.S. focused, or France focused, or Canada focused, or wherever you are?
Are you making assumptions about -- and this again gets to accessibility. Are you making assumptions that somebody is able to use a mouse? Maybe think about using your site by tabbing through it to see what somebody with a visual or dexterity disability might experience.
Or in my case, you know, I have twins. There were times where I had a baby in an arm and I only had one hand to do my work with. And I got a lot of empathy and in this case sympathy and empathy for people who just didn't have as much dexterity to get around with a mouse.
So we're almost done. And have time for questions. Go me.
Okay. So there isn't a one-size-fits-all solution.
Okay?
Think early and often about your quality goals. And get the team on board.
And allow time for testing. Even if you've got some stuff automated, executing that, reviewing the results, and acting upon it also requires time.
And I will open it up to questions. I know I had one about how to convince management to hire a QA. But let's see if there are any others first.
>> DOUG DOBRZYNSKI: And feel free to post in the chat or you should be able to unmute yourself if you would like to do voice.
>> SADIA RODRIGUEZ: Thank you. All right while people are thinking then I'm going to address Doug's question. All right. So actually Doug why don't you restate the question you had? Because I don't want to speak for you.
>> DOUG DOBRZYNSKI: So our situation has been this year we rolled out three big new products. And the system has actually been around about eight years. And it depended upon a lot of existing functionality. And we have no QA or QC person. And we don't even really designate one on the project or allow the budget. So the question was, how do you convince upper management that they need to plan for testing and budget for it? Ideally budget for a person, especially if you have big projects rolling out.
>> SADIA RODRIGUEZ: So budget is exactly the right question to be asking about.
So how do you show people who hold the pursestrings that it's important to spend money on QA? Well, what I've found the most powerful is to show them what the cost is of not having QA. Right?
So it's helpful to gather data for a while. How many bugs are you dealing with? Are you doing emergency rollouts to fix things that would have been addressed if you had been able to test a little bit better?
And not just the time cost but also you know the losing face, the embarrassment of having a bug out there that you would really rather not see.
So I definitely think it's helpful to quantify, if you can, the cost of not being able to test. Right?
Some companies more than others like to see what the industry standard is. And if you can get your hands on it -- and honestly, I haven't looked at this in years -- if you can get your hands on the industry standard for how buggy functionality is, or how many customer complaints show up on Twitter about something that doesn't work, those things can be really, really powerful for making the argument that we really, really need either to hire a full-time person for QA, or to allow time, as we allot money to projects based on the hours that devs are going to be working, for things like bug fixes and bug identification. Does that help at all?
>> DOUG DOBRZYNSKI: Yes, that's very helpful.
>> SADIA RODRIGUEZ: Okay.
>> DOUG DOBRZYNSKI: And I can think of exact things to go with them now.
>> SADIA RODRIGUEZ: Fantastic. Yeah. And that worked for me. In my case it was, fine, we'll give you up as a tech lead if you want to do QA. And that was how we got a foot in the door at the university where I worked. And interestingly, when I left that team and moved to a different team, they ended up hiring two full-time QA people to replace me, even though I had been half-time QA and half-time BA. So I thought it was interesting that when I left, they said, oh, it is interesting when we can't test. And one of the other things -- this is a little squishier -- is you could also try to quantify the time you spend on code reviews and on technical debt, and maybe say, hey, if we had more time to think about quality in the first place, we would have less technical debt going forward. Now that of course assumes that you have management who understand the consequences of technical debt. But I found that to be a good catchphrase and buzzword that management seems to react to.
No other questions? Okay.
Here is one. What is my favorite accessibility testing tool? Oh I could talk about this for hours. I personally for manual testing -- so real-time testing I guess I should say, I really like the WAVE plug-in for Firefox and Chrome.
(Background talking.)
>> SADIA RODRIGUEZ: And is the question how I persuade the product owner that accessibility matters? Or that quality in general matters?
So the question in the chat is, what is your favorite testing tool regarding WCAG? And how do you persuade the PM/program owner? So I'll wait on clarification on that second part about what I'm persuading for but I really, really, really like the WAVE tool. And I think as I'm talking to my fellow QAs, that's a really good foot in the door. Actually if we have time do you want me to show you how it works real quick?
All right. So I didn't mean to stop sharing. I meant to switch to a browser. That didn't work.
Okay. Sorry, guys. So let's go to a site that's nice and generic that people have probably gone to once or more in their lives and you'll see that I have this WAVE plug-in here.
So I load this page --
>> DOUG DOBRZYNSKI: We cannot see your browser, Sadia.
>> SADIA RODRIGUEZ: Oh, that's not good.
>> DOUG DOBRZYNSKI: It's the slide still.
>> SADIA RODRIGUEZ: I'm going to do a new share. Is that better? Can you see my browser yet?
>> DOUG DOBRZYNSKI: Yep, I can.
>> SADIA RODRIGUEZ: Fantastic. Sorry about that. And thanks, Doug.
All right. So I wait for the page to load, and then I just click this WAVE plug-in that I have. And you'll see that it gives me a summary of errors, alerts, structural elements -- which are good because they are signposts to help me navigate the site -- and ARIA, so somebody has been thinking about accessibility. Good features. And there's a contrast error. Now, if I want to see details on each of these, I can go in here. There are at least two images with no Alt text. So here is one. And then I can actually inspect that -- see down here, it has this code option. So I can click on code to inspect exactly which image it is -- sorry, I have to move my Zoom doohickey -- that's missing Alt text. So it's this slide story dark gif; apparently they didn't use the Alt text there. Here is a spacer image that doesn't have Alt text. Here are form elements that do have labels. You can see as I click on them, I can go see what the HTML looks like for the thing that's a problem. Here is something that's very low contrast -- it's this text doohickey thing -- and it's very hard to see the contrast there.
So I love this tool. Because it will pull up all sorts of really obvious accessibility things that could be fixed very easily.
Now, what it doesn't do is provide the sorts of human judgment things that a manual tester will be able to find. So they could have had Alt text throughout this page. But the Alt text could very well say, this is a picture, this is another picture, this is another picture, and this is another picture and this is getting really annoying but somebody made me fill out the Alt text so I'm going to do it. Right? That's not helpful.
So I definitely recommend, especially if you're testing a live site where there are content authors actually inputting information that things that are authored are also looked at as far as QA.
But the nice thing about that is your content authors, depending on your team, can also help with that sort of quality assurance. Maybe they are in charge of accessibility testing.
So I love the WAVE tool. Axe is similar. It's more technical, so for this audience it may actually be the preferred option.
So -- oh, and then one other that a lot of people don't seem to know about -- I feel like everyone knows axe, but the WCAG color contrast checker -- hold on, let me reload so I get rid of the WAVE feedback. No, I don't want you. Yes, you may stop it. Thank you.
All right. And then where is my axe? Sorry; I guess I don't have it installed. All right. But again, the Contrast Checker is really good at showing visual contrast, which I find really, really helpful. Now, what this doesn't show you is when there's poor contrast in something like text that's over an image. But if there's a background and a foreground, this is really, really great at checking the contrast. So it found the same error, I'm assuming, that the other tool found. And you can even see that's pretty much unreadable. So that's bad contrast.
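For the curious, the contrast check these tools run is the WCAG relative-luminance formula, which is easy to reproduce yourself. A minimal sketch, with arbitrary example colors:

```typescript
// WCAG 2.x contrast ratio between two sRGB colors, as used by contrast
// checkers like the one shown above. The example colors are arbitrary.
type RGB = [number, number, number]; // 0-255 per channel

function relativeLuminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: RGB, bg: RGB): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Light gray text on white: fails the 4.5:1 AA threshold for normal-size text.
const ratio = contrastRatio([170, 170, 170], [255, 255, 255]);
console.log(ratio.toFixed(2), ratio >= 4.5 ? 'passes AA' : 'fails AA');
```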
So those are my two -- axe is similar. But it doesn't give the output in as user-friendly a terminology as WAVE does.
So if you would prefer to just go straight to the nitty-gritty of what do you need to do to fix the code axe is probably a good place to start if it's somebody who is more big picture I want to understand more about accessibility, WAVE is a good place to start.
And as far as how to persuade people that accessibility matters -- that, again, is a whole presentation in its own right, but I will try to get to it really quickly. First of all, people are getting sued. People are getting sued regularly for not being accessible to people with disabilities.
A lot of government sites -- including state universities; I worked at one, I know this, and that's how I got into accessibility -- are required by law to meet a certain degree of accessibility for people with different disabilities.
So again, the bottom line thing is a good thing to bring up.
But also, in this day and age you want to look like a good citizen, a good corporate citizen, and putting some effort into accessibility is a good thing to do. It's good for the world, and it shows empathy. It's also worth showing -- and I don't have the number in my head, but go Google the buying power of people with disabilities -- and saying, look, when your site is not usable by people with disabilities, you're leaving this market share on the table because they can't use your site.
Also, I'm kind of a proponent of sometimes just cowboy testing. Because you can always log it as a bug: hey, there's no way to add Alt text. And thank goodness, Drupal has Alt text for images out of the box, but yeah, some CMSs don't. So there are sites out there where authors who would like to add Alt text can't because of the tool. In that case I might just put in a bug that says Alt text for images needs to be authorable. Even if it doesn't get addressed, you've logged it as a concern.
Like at this point at Bounteous our Design Team is really, really great at reviewing their designs for things like color contrast. And consistent navigation and the like.
It is such a lightweight thing to do. I'm trying to remember if this site has skip nav functionality. No, it doesn't.
But it's really lightweight to have skip navigation functionality. Essentially, this stuff that's up here -- U.S., World, this advertisement -- somebody who is accessing a site through a keyboard doesn't necessarily want to have to tab through everything up here (thanks, Doug) to get to the content. And so having functionality where you press tab once and can then hit enter or return to go into the bulk of the page can be really, really valuable. And it's super lightweight.
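If you want to script that check, a rough keyboard-navigation sketch with Selenium is below: press Tab once and see whether the first thing that receives focus looks like a skip link. The URL is a placeholder and the text match is just a guess at typical skip-link wording:

```typescript
// Rough check that the first Tab press lands on a skip link. The URL is a
// placeholder; adapt the text match to whatever your skip link actually says.
import { Builder, Key } from 'selenium-webdriver';

async function checkSkipNav(url: string): Promise<void> {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(url);
    // Send a single Tab keypress to the page, then ask what received focus.
    await driver.actions().sendKeys(Key.TAB).perform();
    const text = String(
      await driver.executeScript(
        'return (document.activeElement && document.activeElement.textContent) || "";'
      )
    );
    if (/skip/i.test(text)) {
      console.log('First focusable element looks like a skip link:', text.trim());
    } else {
      console.log('No skip link found; first focus landed on:', JSON.stringify(text.trim()));
    }
  } finally {
    await driver.quit();
  }
}

checkSkipNav('https://example.com').catch(console.error);
```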
So hopefully that was helpful. If there are any other questions, please do get in touch with me. My email address is right there. [email protected]. Don't forget the movie night tonight and I think we are regrouping in 15 minutes.
Sorry; I'm sure there's a way to get to the end of this faster than this. There we go. Don't forget Contribution Day and I really, really would love feedback -- feedback on the talk. All right. Bye, all.