QA Software Tester - Anyone do this?

I've been involved in some UAT testing.

It was awful; it probably could have been better if the other company had a system tester... (We were testing different feeds from different companies [JTK - Hudson Kapel - Toyota].)

It was very frustrating.
 
The testers in our place have an easy ride and, from what I can gather, are highly paid too. In a nutshell, they derive a test plan and request that it be signed off by the business operations team. They then conduct those tests. If everything completes OK and production issues crop up anyway, they're covered, because they've tested everything that was supposed to be tested.

If there are issues with the testing environment, they palm the issue off to someone else (and hence get no grief for it eating into their schedule). If timescales are tight, they descope, again with agreement from operations.

If there are defects, they pass them to the development team. Again, they get no grief.

In fact, it's a pretty dossy job with very little accountability. Money for old rope. Could I do it? Boring as hell if you ask me, unless you're doing something interesting like performance testing, but then again, only the first time through the cycles.
 
It depends on the working environment. If it's anything like where I'm working, you'll be sat with the development team and helping them to understand the requirements and acceptance criteria for the work, alongside helping the product owners and stakeholders (i.e. the people that want the software) come up with meaningful and concise requirements in the first place.
Are you actually a tester, Jestar, or do you just know what the testers have to do at your place? Are they still referred to as testers in your place of work, or do they have a different title? The reason I ask is that I want to get away from what I consider to be low-level testing and move on to more high-level work like you've mentioned, or perhaps onto the automation side of things.

The problem is I'll never be able to do this at my place, and job adverts rarely have enough detail to tell what kind of role it will be. Obviously I can apply for the role and find out that way, but if there are certain job titles/buzzwords to look out for, that would be handy to know!
 
act as an intermediary between software users and the software developers.

You will work with the clients to understand the issues that they are having with their software system, identify ways in which it can be improved or developed to resolve these issues, and communicate these developments back to the technical development team who do the coding in the software.

To me this doesn't sound like a pure testing role; that stuff would in many organisations fall to a Business Analyst or similar role. It could be good experience, BUT I'd be wary in terms of what salary they are offering, i.e. taking on a lot of responsibility for not much pay. The fact they invite applications with no experience is quite worrying, as a complete novice probably doesn't have the training/skills/experience to properly elicit and document such requirements.

I used to do software testing and can tell you that it can be quite varied depending on the organisation. Things I used to do included:

-Manual test scripting (basically devising tests based on specifications for new software)
-Automated test scripting (as above but more complex/specific to the automation tool in question)
-Test script execution (running the above)
-Exploratory testing (basically using the application and feeding back issues, trying to break it, more of a 'joined up' approach than simple script execution)
-QA of test scripts created by others (making sure they cover all requirements etc)
-Writing test strategies (unlikely for a junior role with no industry experience but over time you might do this if you become expert in a certain area)
-Process improvement ideas i.e. suggesting how we could improve ways of conducting QA/testing
-Release management (i.e. scheduling new software releases, compiling release notes etc)
-Estimating (i.e. deducing how much effort you believe is required to create and execute tests for new software)
-Regression testing (essentially re-execution of baselined test scripts to ensure new changes haven't broken existing functionality)
-QA on release documentation
-Writing user guides

Plus a bunch of tasks that I wouldn't expect a junior level tester to be involved in (creation of job profiles, writing test strategies, line management of onshore/offshore testers etc)
 
Are you actually a tester, Jestar, or do you just know what the testers have to do at your place? Are they still referred to as testers in your place of work, or do they have a different title? The reason I ask is that I want to get away from what I consider to be low-level testing and move on to more high-level work like you've mentioned, or perhaps onto the automation side of things.

The problem is I'll never be able to do this at my place, and job adverts rarely have enough detail to tell what kind of role it will be. Obviously I can apply for the role and find out that way, but if there are certain job titles/buzzwords to look out for, that would be handy to know!

Strictly speaking I'm a developer, but what I actually do to get the job done is not that different from what the QAs (which is what we refer to them as, instead of "testers") do. I also coach teams, so I have to know what the deal is anyway, because coaching a technical team is essentially helping them communicate with themselves and with external teams/people, which is a very core part of what both the QAs and the devs do. Possibly more so the QAs, but not by much. Just to be clear, there are two QAs in every dev team.

We also have a huge team of regression testers, but again what they really become are people with thinking hats on who tell us what they want tested as part of a release, and we, the developers, do what we can to help them remove any monotony from their job. It's important to note that the ideal goal is to have all the boring stuff automated, leaving the intricate/fun stuff (basically wacky edge cases and exploratory testing) to be done manually by someone who has a good mind for thinking of quirky things and is able to accurately repeat the process. We're also in the transition from having them (the testers) in their own team to integrating them within the multiple dev teams we have. It's more efficient to have them involved along the entire process than it is to have a "dev" phase, then a "test" phase.
 
Those with automated tests going on. Are they recorded UI action tests, or scripted via CodedUI tests?

Which software do you use for this, and would you recommend it for a C#/.NET application that involves databases too? (SQL Server 2008 R2 Express)
 
SpecFlow for stuff that we want spoken in the domain's language. Nice Given/When/Then format that is literally English for the non-tech folk to see. Else we just use NUnit if it's only going to be techy folk.
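For anyone who hasn't seen it, a SpecFlow feature file is just plain Gherkin; a minimal made-up example of the Given/When/Then format (the feature and step names here are invented for illustration) might look like:

```gherkin
Feature: User registration
  # Readable by non-technical stakeholders; the test framework
  # binds each step to a step definition written in code.
  Scenario: Registering with a new username
    Given no account exists for "alice"
    When "alice" registers with a valid password
    Then an account is created for "alice"
```

The non-tech folk read (and often help write) the feature file, while the devs/QAs implement the step bindings behind it.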

We have a large number of recorded tests (Selenium IDE) but we hate them. Very fragile, near impossible to maintain, and very difficult to make repeatable in many situations due to unique identifiers and the like (e.g. a recorded test to register a unique username is impossible without somehow cleaning the data beforehand or similar, which is something the UI won't let you do, and shouldn't!).
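One common way round the unique-username problem (sketched in Python rather than the C# stack discussed above, with made-up names) is to have the scripted test generate its own unique test data at runtime instead of replaying a fixed recorded value:

```python
import uuid

def unique_username(prefix="testuser"):
    # Append a short random suffix so every test run registers a
    # brand-new username and never collides with existing data.
    return f"{prefix}_{uuid.uuid4().hex[:8]}"

# A scripted registration test would use this instead of a
# hard-coded name captured by a recorder like Selenium IDE:
username = unique_username()
print(username)
```

The recorded-test equivalent can't do this, which is exactly why hand-scripted tests end up being the only repeatable option for flows like registration.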

The tests on our UI try to be just very shallow tests, e.g. does this button trigger this action/method on the underlying model?

Then we can have better (easier to control and maintain) tests at the seam between UI and model to assert the behaviour.
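As a rough illustration of that seam-level approach (a Python sketch with a hypothetical view model and mock, standing in for what would be C# and NUnit in the setup described above): the UI handler does nothing but delegate, so the shallow test only has to assert the wiring.

```python
from unittest.mock import Mock

class RegisterViewModel:
    """Hypothetical view model: the UI wires its Register button
    to on_register_clicked, which only delegates to the model."""
    def __init__(self, model):
        self.model = model

    def on_register_clicked(self, username):
        # No logic beyond delegation; that's the point. The shallow
        # UI test checks the wiring, and the real behaviour is
        # tested separately against the model.
        self.model.register(username)

# Shallow test at the UI/model seam: does the click trigger the call?
model = Mock()
vm = RegisterViewModel(model)
vm.on_register_clicked("alice")
model.register.assert_called_once_with("alice")
```

Because the model is mocked out, these tests stay fast and stable even when the UI or the backing data changes.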
 
SpecFlow for stuff that we want spoken in the domain's language. Nice Given/When/Then format that is literally English for the non-tech folk to see. Else we just use NUnit if it's only going to be techy folk.

We have a large number of recorded tests (Selenium IDE) but we hate them. Very fragile, near impossible to maintain, and very difficult to make repeatable in many situations due to unique identifiers and the like (e.g. a recorded test to register a unique username is impossible without somehow cleaning the data beforehand or similar, which is something the UI won't let you do, and shouldn't!).

The tests on our UI try to be just very shallow tests, e.g. does this button trigger this action/method on the underlying model?

Then we can have better (easier to control and maintain) tests at the seam between UI and model to assert the behaviour.

As a Product Owner I love SpecFlow. I've been working with some devs who are trying to develop a methodology for mapping the domain as smoke, integration and unit tests, either by promoting Givens or Whens as they move through the process, or by defining them in more detail. It's a good process.

Selenium sucks, as you say, because of its fragility. It's hard enough to keep track of what you are breaking, never mind getting round to fixing it all within the sprint.
 