Monday, June 27, 2011

It ain't negativity if it's the truth

I think every software tester has been accused of "being negative" at least once in their career. Funnily enough, a reply of "that's because it's my job" doesn't go down too well.

Still, when your job is based around finding what's wrong with something before it gets into the gentle hands of your customers (who are usually even more able to find problems than the best testers, because... well, we do what we think a sensible person would do. We usually don't get a chance to do monkey testing and randomly click stuff because we're flat out trying to make sure that what should work, does), you're going to have a mindset that's focused on finding flaws. Possibly you'll even get nitpicky. Or - horrors! - negative.

When you look at testing in the broadest sense, we've got several basic responsibilities when it comes to new software. We're effectively responsible for making sure that
  • it does what it's supposed to do
  • it doesn't do anything it shouldn't do
  • it doesn't break anything that used to work
  • if it does barf, it does so as gracefully as possible
  • and it doesn't gobble all the available CPU, take forever to do a simple lookup, or fall down on any of the thousands of usually undocumented extras that are taken for granted.
  • Oh, and it looks good and doesn't confuse anyone.
In my experience, the first and the third points generally get hit for the most obvious scenarios. Anything beyond that is a bonus, because getting those two stable usually takes all the time that's available before the release goes out. Also in my experience, code delivery is often late (usually because the schedule wasn't exactly realistic in the first place), and release dates are an immovable object of Herculean proportions (usually because there are contractual obligations to have something in a customer's hands by a certain date for money to change hands - and if money doesn't change hands there are serious implications for little things like jobs).
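(For the curious, here's a toy sketch of what the first few points on that list might look like as automated checks. It's pytest-flavoured Python, and the lookup_account function, its data, and its behaviour are entirely made up for illustration - not anything from our actual product.)

```python
# A made-up sketch of the first few bullets as automated checks.
# lookup_account is a hypothetical stand-in for a real feature.
import pytest


def lookup_account(account_id):
    """Toy lookup: returns a record, None for unknown IDs, or raises on garbage."""
    accounts = {"A-100": {"name": "Example Co", "active": True}}
    if not isinstance(account_id, str) or not account_id:
        raise ValueError("account_id must be a non-empty string")
    return accounts.get(account_id)


def test_does_what_it_should():
    # Point 1: the documented happy path works.
    assert lookup_account("A-100")["name"] == "Example Co"


def test_does_not_do_what_it_should_not():
    # Point 2: an unknown account doesn't hand back someone else's data.
    assert lookup_account("A-999") is None


def test_existing_behaviour_still_works():
    # Point 3: behaviour the last release relied on hasn't quietly changed.
    assert lookup_account("A-100")["active"] is True


def test_barfs_gracefully_on_garbage():
    # Point 4: bad input fails with a clear error, not a crash deep in the stack.
    with pytest.raises(ValueError):
        lookup_account("")
```

Notice how thin that is compared to the list - and that's before you even get to performance, usability, or "doesn't confuse anyone".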

Also in my experience, when a customer hits on the one scenario in several million that blows the software up, they blame the testers for not finding it. They don't know - or care - that we're supporting hundreds of other customers who all have completely different configurations and would never hit that problem. They just want their scenario to work.

This might not go down well with managerial types, but I don't consider it to be negative when it's true. Even my typically blunt form of honesty isn't something I'd call negative - because I'm not claiming things are worse than they are.

Of course, to those who like to live their professional lives in cocoons of happy-happy sales talk, it's going to sound negative.

(True story - the worst year my employer ever had, every month without fail the sales manager kept on with the "but next month will be great and it will all turn around". Needless to say, it didn't - but whatever the heck drug the sales dudes are on, I wouldn't mind a stash for release days. It must be potent stuff.)

Tuesday, June 21, 2011

Blood in the water

The workplace is getting || this close to the next major release of our software. So tempers are short, stress levels are rising, regression scripts are breaking, new emergency fixes are being released and... what?

Oh yes. In theory, it's not supposed to be this way. There's this mystical thing called a "lock date", after which nothing more than bug fixes to the release project are integrated. I'm not sure when this release's lock date was, but it flew by a while back. The actual lock date for our software is typically some time after no-one is using that version any more.

I only wish I was exaggerating... Today an emergency fix went out to a customer. The "emergency" consisted of one set of extra logging and two minor feature requests. That's right. New features. Four requests for emergency fixes came in during the day, most of them rather less than the life-and-death-the-regulators-will-have-you-for-lunch kind of things that emergency fixes are supposed to be for.

This release has at least eight new development projects in it as well as a massive number of bug fixes, most of them copies of someone's emergency fix from two weeks ago. All of this gets tested by a team of five-soon-to-be-six over the course of maybe two months. Um. You think maybe there's a slight problem with scale here? Possibly even a little case of "bit off more than we could chew"?

The problem, really, is one that bedevils anyone who makes software. No-one can estimate a software project until they've done enough work on it to have a good idea what they need to do. When the project is paid for by Customer X who has a drop-dead must-have-it date of Y, things get... interesting. Because usually Customer X's drop-dead date is somewhere earlier than the time it would take to build and test the feature - and rather than have Customer X take their money elsewhere, the sales guys tell them we can do it, then let the project managers explain why the feature isn't going to be quite as advanced as they expected... Hilarity, as a rule, does NOT ensue.

And people wonder why testers have such a negative reputation. We testers are the ones stuck between that immovable release date and the stealth features, late development (usually because someone failed to account for little things like "the developer is human and needs to sleep at least a few hours each day"), unforeseen weirdness with third party interfaces, the world's weirdest regulations, and who knows what all else. (Whatever it is, I saw most of it today. Yes. I'm tired. Yes, I'll be really glad when this release goes out. Then I can take a deep breath and dive headfirst into the next one.)

More on the tester negativity thing later, when I'm not too fried to think.

Sunday, June 19, 2011

Managers or Leads?

Da Boss is pushing to have the testing group re-formed as a formal group all reporting to one person, and he doesn't want that person to be a lead. (It's complicated. We used to be a separate testing group with a lead, then we got split up and housed with the developers, reporting to the development lead for our group - except that a lot of the things that need to be coordinated for us just didn't happen because no-one had that time allocated, and well... it got messy. So now that we've been integrated with development and report to dev leads who haven't got a clue what to do with a tester, Da Boss wants us to de-integrate but stay co-located.)

I'm all for the idea of the testing group as a separate concern, while staying co-located with the developers. We work closely with the whole software development team, and conversation tends to happen a lot more easily when you're in the same room. What I'm not so sure about is the manager vs lead question.

Da Boss's view is that - understandably - he doesn't want to see a good tester lose half their time to administrative stuff. At the same time, you can't do that administrative stuff if you don't understand QA and testing. Then there's the psychological factor. Someone who's a "manager", however minor, is one of "them" and not in the trenches, whereas a lead is still one of "us". I've never seen anyone manage to get past that particular psychological factor. Just being a manager is enough to put a distance between you and the rest of the team - and a manager for a team of 5 (soon to be 6, maybe 7 if we can fill that last vacancy) seems a bit excessive. Plus the company's always been pretty flat in the admin structure, with only 2-3 levels of hierarchy.

All of which sets me wondering whether it's better for teams to have leads who juggle administrative and technical duties, or managers who focus on the administrative. I don't know the answer - and I don't know if there is an answer.

Tuesday, June 14, 2011

Food for thought - what are we here for anyway?

So, I was surfing the testing blogs the other day and came across a post that got me thinking: Let's Destroy The World.

Now sure, destroying the world is kind of what we testers are supposed to do, at least if you ask some of the programmers who've had a sarcastic bug report too many, but seriously - it's our illusions as testers that Marlene wants to destroy, because - guess what? - they're counterproductive.

Go read the post, and see if you don't agree.

Testing isn't a career choice, and it isn't something people typically want to stay in. Testers are too often underpaid and treated as unskilled monkeys, and if they try to push for better process - you know, the things that prevent bugs instead of causing them - they find themselves having to look for another job all of a sudden.

The question is how to change this mindset.

It starts at home, with us. If we want to be treated like professionals, we first need to act like professionals. Yes, the testing mindset is to find the flaws in everything. It's part of the package. But how we communicate those flaws is up to us. If we treat developers as collaborators in the production of high quality software - software that does what people want it to do, doesn't hide a bunch of flaws and security holes, and doesn't need to be restarted every half-hour because it's done something illegal - we'll get a much better result. If we treat project managers the same way, we get better results. And if we lock ourselves in our silos and bitch about how buggy whatever gets thrown over the wall to us happens to be, we're never going to get better results.

In the end, it doesn't matter how you got there. The world doesn't care - all the world cares about is that the results are there. The first step on that journey is finding common cause with the people we work with. Then they can be allies, advocates, and even Speakers To Management (a critically important position that should never, ever be neglected).

So what are software testers here for? Me, I'm in it because I like testing - I like finding bugs and tracking them down to their deepest, darkest roots. To me, that's puzzle solving, which I love. But the true appeal is this: I'm making things better for someone. In a small but significant way, what I do helps to make people's lives and jobs easier.

Maybe that is where we testers need to focus our PR - we make people's lives better. We save them frustration, anguish - and sometimes, we even save their lives.