At first, 100% of the Scrum stories were about developing the application, in the sense of writing code. Now that a QA process has been introduced, the Product Owner especially has to take into account that some of the tickets in refinement will be about improving the software process itself.
That is a big change in work orientation for some of the ICT colleagues.
Also, when QA is introduced more formally, in many cases there was always "some dude who actually tested the stuff", but he was never formally a QA/software tester. This means there are people with a ton of knowledge about the application, but that knowledge lives only in their heads.
The newly introduced QA people do need that knowledge, so time has to be reserved to transfer it.
That reserved time must be firm and certain; it should not be something optional.
Also, when it gets complex, it should not be the case that "this dude" says "oh, let me check this quickly for you" just so the story can be dragged to the 'done' column. No! There must be cross-pollination of knowledge: knowledge transfer to the new QAs.
More items related to software testing should enter the Sprints.
The next step is to estimate those software-process-related items with Scrum poker.
Bring them to the table.
What I have unfortunately seen a lot is that a QA asks for certain functionality in the software to help the testers check the correct functioning of a module more quickly. But that request gets shot down by the Product Owner (or others) because 'other things are more important'.
What you then see are some recurring patterns.
1) When that request is finally met (coded and pushed to the test environment(s)), it is often the users of the information system, not only the testers who asked for it, who really like the new functionality.
2) The requests are not fulfilled, and heavy nagging about the throughput time of the functional/regression testing starts to take center stage.
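To make pattern 1 concrete: the kind of testability request I mean is often something small, such as a helper that seeds known test data so a module can be checked quickly without manual setup. The sketch below is purely hypothetical (the table, column, and function names are all made up for illustration) and uses an in-memory SQLite database to stay self-contained:

```python
import sqlite3


def seed_test_orders(conn, count=3):
    """Insert a known set of orders so a tester can verify the
    (hypothetical) order module without clicking through the UI first.

    Returns the total number of rows in the orders table afterwards.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, status TEXT)"
    )
    # Every seeded order starts in a predictable 'open' state,
    # so regression checks have a fixed baseline to assert against.
    conn.executemany(
        "INSERT INTO orders (status) VALUES (?)",
        [("open",)] * count,
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(seed_test_orders(conn))  # prints 3
```

A helper like this costs a developer very little time, yet it can shave hours off every regression round, and, as pattern 1 predicts, support staff and power users often end up wanting it too.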
All in all, hiring software testers and/or a QA Manager in the company and thinking everything will work out just fine automagically is not a good idea.
You really have to do this consciously and thoughtfully!
There are companies that are advanced with software testing, and there are companies that are just starting with it.
There are companies with a gigantic budget for hiring testers, and there are companies with almost no money at all to hire even one tester.
That said, let's take an average company with one application and five developers.
They bring their first QA Manager into the team.
Well now, there has to be some set of rules, so the QA Manager starts writing a test policy (in Dutch: testbeleid).
As the new QA Manager you can't just say, "hey, I wrote down some test policy rules, but whatever, just put into production whatever you want, however you want, it doesn't matter..." No, it just doesn't work that way.
Just like when driving a car in traffic, you have to obey the rules.
So what you need to understand now is that the test policy rules you write in that document can be very strict and formal, or a bit looser and easier to live by.
It can be the case that a certain company (everything is context dependent) actually needs very strict test policy rules. They may, for example, require a release report before a new version of the software product can be installed on the production environment.
But if the developers were not used to that, the gap between how they used to work and the rules they now (in their perception) have to obey can be very big.
They can get the feeling that they can't adjust to such big differences in such a short time.
And in some cases, the QA Manager will experience that they either can't do it or they will not do it. So that is a big challenge.
What you will notice as a QA Manager is that you can't just turn that knob all at once; you'll have to be sensitive and perhaps introduce a little more formality step by step.
Then the question arises: okay, but there is so much that could go into the test policy, where do I start?
It's important that you make clear to the developers: "Okay guys, you had a way of working where you decided how things went, but since you want to do more with QA, you have to understand that things are going to change. You may not like it, but that's the way it is going to be."
Consider the following: a fictional country has roads, and cars drive on those roads, so now they need traffic rules. But wait: five humans (coincidentally developers) in this country want to drive whenever and however they want, and they say they have no need for traffic lights or anything of the sort... hmm, that would be a strange situation, right?
Well, the same goes for introducing QA in a company: either you do it and you do it right, or you don't do it at all (with all the consequences that come with that).
One of the biggest changes for the developers is the aspect of time management. Let's say they used to be able to spend all 8 hours of a working day on coding. Now suddenly there is a QA who starts asking all kinds of questions, while the developer just wants to write code.
That's not very cooperative towards the QA Manager, you could say.
Well, there is also a more formal way to look at this:
Let's cast our minds back to the example of the traffic rules in the fictional country. Suddenly one of the five developers gets stopped by a traffic cop.
Wouldn't it be a strange situation if the developer said to the cop: "I've got an appointment with the barber, sorry, but I'm not going to answer your questions"? That wouldn't be allowed, right?
What you see many times in companies when QA is introduced is that people have to get used to the fact that suddenly there are people asking questions about their work. For some, that can be such a big change that they can't get used to it within the time span in which it is actually needed (for example, the QA has to write that release report but still doesn't have the status update for the module the developer is working on).
Another aspect of this is timing: sometimes the moment the QA needs answers to his questions is exactly the moment the developer is in deep focus on the logic of the code. Yes, that happens, but the QA still can't stay too informal about it. Sure, he can postpone his question sometimes, but the developer has to get used to the fact that five minutes of talking may keep the QA going with testing for hours, maybe even days in some cases.
Another aspect of introducing QA for the first time in a company is that many people don't understand how big the palette of topics is that a software tester can come into contact with.
So what happens is that they want the QA to do regression testing, functional testing, writing automated tests, assisting with the User Acceptance Testing, security testing, performance testing, database testing... to the point that all those tasks can no longer be done by one person. (And the QA needs even more rules in the test policy document, e.g. a Test Risk Analysis session to scope and plan his work, while the group hasn't even adjusted to the current test policy yet. A dilemma.)
Then budget aspects come into play as well. If a company has almost no budget for one QA, or (as sometimes happens) has the budget but just doesn't want to hire another person ("even more traffic cops? pffff"), while the demands on the amount of work or the test throughput time are completely unrealistic, it can get to the point where a QA engineer gets frustrated.
What you see many times when QA is introduced in a company is that the hierarchy (who reports to whom?) stays the same for a long time, and not the way it should be.
Everybody understands that a traffic cop doesn't have to justify what he does or why he starts asking a car driver questions. That the car driver is older than the traffic cop is not an argument.
In companies, however, it often happens that the new QA has to report to the lead developer (or another team member, for that matter). That, in fact, makes no sense: "he has been with the company longer" should be seen as just as invalid an argument as "the car driver is older".
It gets even worse if the QA is held accountable when bugs show up in the production environment.