UAT & BAT

More of a rant than a career question I suppose, but does anyone here get involved in UAT or BAT for new software/software developments?

Maybe it's just my organisation, but I've been involved in a fair number of these tests over a few years with different areas of the business, and without doubt they are the most disorganised, shambolic events I have ever attended.
Poorly prepared scripts, no clear indication of what you are looking for or where to find it, people wandering in and out with no apparent schedule or agenda, tech people unable to provide sufficient data to test the system with, and so on.

Am I alone or does anyone else encounter similar frustrations?
 
I'd love to get into testing; it seems such a logical field, although it may be very different from how I imagine it.

I've worked at a multinational gambling company and now at a national estate planning company developing in-house software.

Testing in both environments has been a joke. It has literally been a case of "here, try and break it!", with no plans, no scripts, and no thought put into it at all.
 
Add OAT to that. Always an afterthought, normally initiated by the supporting team refusing to take it into live support... :D
 
I think it depends on the culture of the organisation, and the individuals involved.

UAT is, in my opinion, a good way of identifying talent within an organisation, i.e. people with an aptitude for software testing and process improvement who also have an inherent understanding of (at least some aspects of) the business. You will often find people working in a call centre or similar who could do a lot worse than use UAT as a means of getting a foothold into IT. As projects work with UAT testers, they can determine who is genuinely interested, who is finding significant issues, and who is providing good feedback and suggestions on new systems. I have seen several people over the years move from basic call centre/admin positions into Test/BA/PM roles. As mentioned, however, it does depend a bit on the culture of the organisation, and on whether IT change projects are outsourced.

Probably one of the biggest issues is a lack of clear acceptance criteria and up-front planning of what is required to sign off on a change. A lack of sufficient data to test with, for example, can be caused by the need not being identified soon enough. It should form part of the requirements process, rather than "here you go, test this software" followed by "I can't test this, it doesn't contain any historical data".
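For what it's worth, even a minimal scripted check with agreed expected results beats "try and break it". A rough sketch of what I mean (the function, names and figures below are entirely made up for illustration, not from any real system):

```python
# Hypothetical UAT script: each step has an explicit expected result,
# so a tester records pass/fail instead of wandering about blind.

def calculate_premium(age, cover):
    """Stand-in for the system under test (illustrative only)."""
    return cover * 0.01 + (age - 18) * 2.5

# Step 1: a known historical case with an expected result agreed
# up front, during requirements, not invented on the day.
expected = 1000 * 0.01 + (40 - 18) * 2.5
actual = calculate_premium(age=40, cover=1000)
assert abs(actual - expected) < 0.01, f"Step 1 failed: got {actual}"

# Step 2: a boundary case identified in the requirements process.
assert calculate_premium(age=18, cover=0) == 0.0, "Step 2 failed"

print("All scripted steps passed")
```

The point isn't the code, it's that the data and expected outcomes exist before the test session starts.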
 
As above, it depends on the people involved. So many places and people just pay lip service to these things. I've done a fair bit over the years, mainly from the development side. People with great attention to detail who are willing to listen are essential. With the right people it's a very valuable process.

Without it, people just repeat all the classic mistakes in developing a new system.
 

Agree with this completely. It usually all goes wrong at the start, in the requirements stage.

To the OP - How does what you've described go down at your CAB/release session to agree deployment into production?
 