Can we replace everything with automation or do we need manual testing?
I have seen arguments that we should automate all our tests, and I have seen arguments that manual testing is necessary.
I don't know which one to believe. Is it even possible to automate all tests? When people say that all tests should be automated, do they mean the kind of tests where manual testers work through a detailed test script or do they mean the kind of tests where manual testers explore the application?
How do I decide which approach is correct?
Tags: automated-testing, manual-testing, application-software-testing
asked Nov 14 at 5:23 by Pranali Mane (new contributor); edited Nov 14 at 13:30 by Kate Paulk♦
I have expanded your question as it is attracting good answers. You can revert my changes if you think I misunderstood you.
– Kate Paulk♦
Nov 14 at 13:31
Possible duplicate of Can every test be done by automation?
– Alexey R.
Nov 14 at 22:52
Obligatory xkcd: This will tell you what you should/shouldn't automate.
– TemporalWolf
Nov 14 at 22:58
You have to manually test your automated processes.
– Simon Richter
Nov 15 at 15:02
I think this is somewhat subjective, similar to whether or not you think things like design/UX can be automated (for what it's worth, I don't think any of this can be effectively automated).
– ESR
Nov 16 at 5:00
10 Answers
Answer by V.A. (23 votes)
IMHO, anything in testing that is monotonous and repeatable can, and should, be automated.
Having said that, manual testing is irreplaceable and should be used for creative exploratory testing, which is driven purely by the tester's experience and intuition: asking 'what if' questions to dig deeper than the obvious test scenarios, which takes skill and creativity.
In order to determine what is monotonous and repeatable, you first have to do manual testing. Fringe cases become monotonous and repeatable once you have worked through them enough to define the correct process for them; the automation then moves a step further, and you go back to manual testing until the next repeatable pattern emerges.
– Nelson
Nov 15 at 5:53
Answer by Stephen Byrne (10 votes)
The answer is "it depends".
Let's say testing is divided into two main categories: Functional and Exploratory.
Functional
Functional means "Prove that something works as per defined requirements", generally by following a test script:
- Click Button A.
- Enter this text into Textbox 2: "foo".
- Click button B. The screen should then turn bright pink.
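Where the application under test has a web UI, a scripted check like the one above maps fairly directly onto automation. The following is a minimal sketch using Selenium WebDriver in Python; the URL, the element IDs (button-a, textbox-2, button-b) and the exact "bright pink" colour value are illustrative assumptions, not details from any real application.

```python
# Minimal sketch of automating the scripted functional test above with Selenium.
# The URL, element IDs and the expected colour value are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_button_b_turns_screen_bright_pink():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/app")                     # hypothetical app URL
        driver.find_element(By.ID, "button-a").click()             # Click Button A
        driver.find_element(By.ID, "textbox-2").send_keys("foo")   # Enter "foo" into Textbox 2
        driver.find_element(By.ID, "button-b").click()             # Click Button B
        # The screen should then turn bright pink.
        background = driver.find_element(By.TAG_NAME, "body").value_of_css_property(
            "background-color"
        )
        assert background == "rgba(255, 105, 180, 1)"              # assumed "bright pink"
    finally:
        driver.quit()
```

Once a test like this exists it can run on every build, which is exactly the kind of monotonous repetition automation is good at.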
Exploratory
Exploratory means "Try to break this", generally relying on the tester's own creativity and ingenuity, e.g.:
- Click Button A
- Now paste in 10,000 Emoji characters into Textbox2
- Click button C, not B.
- Does that cause something interesting and unexpected to happen, etc.
In general
Generally speaking, you should aim to automate the first kind of tests - normally after first performing them manually so you know they pass.
But with the right kind of software development flow you can sometimes write them before the code is even written; in practice, however, this is unfortunately rare.
However
One thing to be aware of is that sometimes it is hard to automate some functional tests.
- Web and Desktop UI applications are pretty easy, since there is a well-defined model to work against and many tools to help automate these (Selenium, etc).
- However writing automated tests for a service app with no UI and with a very poor API can be hard.
- Tests that require specialized hardware or licenses for the thing being tested can be hard as well because you just don't have non-production resources to test against.
- So you will find that the ability to automate these things varies.
Notes on exploratory testing
The second kind (Exploratory) generally cannot be automated, because they rely on human intuition. However, there are exceptions - for example, data entry forms can be automated via "fuzz" tests which will try lots and lots of combinations of inputs to see if they can find some combination that causes a problem.
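As a concrete illustration of that fuzz-style approach (and of the property-based library mentioned in a comment below, Python's Hypothesis), here is a minimal sketch; validate_form is a hypothetical stand-in for whatever code actually handles the data entry form.

```python
# Sketch of fuzz-style input generation with Hypothesis; validate_form is a
# hypothetical form handler standing in for the real application code.
from hypothesis import given, strategies as st


def validate_form(name: str, age: int) -> bool:
    """Hypothetical handler: accepts a non-empty name and a plausible age."""
    return bool(name.strip()) and 0 <= age <= 150


@given(name=st.text(), age=st.integers())
def test_form_handles_any_input_without_crashing(name, age):
    # The property: for *any* generated combination of inputs the handler
    # returns a boolean instead of raising an unexpected exception.
    assert isinstance(validate_form(name, age), bool)
```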
So when people say that all tests should be automated, yes they generally mean the Functional tests and not the Exploratory tests.
Exploratory as an input to Functional automation
Another thing to consider is that Exploratory testing can often be an input into automated Functional testing; for example, once you've found an edge case using manual testing, creating an automated regression test for that case can provide great value. In this way you've turned an unknown case into a defined, functional case. It's also a good time to talk to the Stakeholders and Developers to figure out whether that new behaviour you've just discovered should be kept :)
How do I decide which approach is correct?
The answer, again, is that it depends on things like:
- How much time you have to write automation
- How much writing that automation will cost
- How much value it will provide.
There is no point in manually performing simple functional tests thousands of times on a critical part of the application if a single day of coding can give you full automation of it (e.g. a Login dialog box); on the other hand, there is no point spending a month developing automation for a feature that is not important (say, a non-critical part of an application's "About" dialog box).
Hope that helps.
I would like to add something about exploratory testing: if you find an issue, convert it into a functional test case so that a developer can fix it, it is prevented from occurring again, and the tester does not have to test it manually again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like Python's Hypothesis are a form of automated exploratory testing, so it can be automated, at least to a degree. But they don't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
Answer by Kshetra Mohan Prusty (3 votes)
This is a pretty straightforward question, and I think everyone will agree on this:
Is manual testing necessary? --> A must.
Can we replace everything with automation? --> Mostly NO. When it comes to automating the testing of an application in a project, there are a lot of factors to consider (e.g. timeline, feasibility, ROI, maintainability, future plans). In my experience, you have to be wise in deciding the extent of automation that you are planning for the project.
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
Answer by João Farias (2 votes)
In summary, computers can only do test execution, and only a subset of it. Since testing encompasses more than execution, the answer is: No.
For more details and other factors, see my blog post on it:
http://thatsabug.com/automation/testing/2018/11/08/why_automation_will_not_save_you.html
Answer by BetaTester (1 vote)
Manual testing is the main purpose of testing itself; it's definitely necessary.
You could only replace everything with automation if you were working on a product that will be used by no one (which, as far as I know, doesn't exist).
Testing is BETTER with automation, but full automation? NO.
I think the ratio will be different for each person and each project; for me it's 70% manual vs 30% automation.
Things like what you are doing right now - asking for feedback, insight, perspective, wild ideas, etc. - are something I believe automation can't give.
Even in a fully automatic factory, humans are still involved as a safety fuse, right?
Answer by Prasad_Joshi (1 vote)
Not everything can be replaced by automated testing, nor can everything be covered by manual testing; in the context of testing, the two exist in proportion to each other.
Automated testing helps cover the parts of manual testing that are repeated, and is best applied to stable builds that do not receive frequent major updates.
So yes, manual testing is necessary.
Answer by user35633 (1 vote)
We can't reduce manual testing to zero, but we can minimize it.
Things which are critical and repeatable must be automated, and there has to be a strategy in place for converting manual tasks into automated ones.
Answer by Philipp (0 votes)
One thing you cannot test automatically is user-interface acceptance.
Maybe you can automate things like "does the OK button exist" and "does the correct thing happen if you generate a click event on the OK button". But you can't catch bugs like the OK button being rendered at a size of 1x1 pixels, positioned outside the viewable area, behind a different button, with the label "OK" in the wrong language, upside down, or written in a white font on a white background.
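For illustration, the checks that are easy to automate look roughly like the sketch below (Selenium in Python, with a hypothetical page URL and ok-button ID). Even with the size and visibility assertions added, nothing here would notice a wrong label, an upside-down button, or white text on a white background.

```python
# Sketch of the kind of UI checks that *can* be automated; the URL and the
# "ok-button" ID are assumptions. Passing these assertions still says nothing
# about wrong labels, orientation or white-on-white rendering.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_ok_button_exists_and_responds():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/dialog")               # hypothetical URL
        ok = driver.find_element(By.ID, "ok-button")             # "does the OK button exist"
        assert ok.is_displayed()                                 # not hidden from the layout
        assert ok.size["width"] > 1 and ok.size["height"] > 1    # not rendered at 1x1 px
        ok.click()                                               # "does a click do the right thing"
        assert "confirmation" in driver.current_url              # assumed expected outcome
    finally:
        driver.quit()
```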
A human tester would notice immediately that something is wrong. But an automated test suite would need to be really advanced to detect these errors, and if you built such an advanced test suite, it would generate a ton of false positives. A human user might not even notice that the button is a pixel smaller than it used to be, as long as they can still find it; an automated test suite, however, can't distinguish between noticeable and unnoticeable differences.
So no matter how much you automate your testing, you should always have a human tester in your deployment process as a final sanity check.
Answer (0 votes)
The key factor is not technical ability, but cost
Other answers have given good insights into the kinds of tests that can be readily automated, and those that can be performed better (on a technical/quality level) using Manual QA. However, I feel one key thing that is often missed is that the real decision is never "can we automate these tests", but "is it more cost effective to automate these tests".
When developing a test automation strategy, the goal is exactly the same as for a manual testing strategy: to increase product quality.
Much like moving QA away from developers and onto dedicated QA staff, automation is not a move that directly increases quality by itself. Instead, it is simply a different way to achieve the same quality increase, one which has different costs associated with it.
Importantly, given enough time and resources, automation can be used to perform all testing on a project. However, the cost associated with most of this testing is far higher than hiring manual QA to test the same areas.
Note that cost here does not exclusively refer to monetary cost; it also includes the time required and how that impacts the release schedule of your product.
As such, in any situation, when considering what "can" and "cannot" be automated, the real question that needs to be asked is: in what areas would using automation be more cost-effective than using manual QA?
Key Considerations
When determining which areas may be appropriate for automation, some key criteria may be:
- Is your product a single release, or a long-term service?
While development is ongoing and features are being changed, automation will continue to require development to keep up with the changing requirements. In a single-release product, such as a video game, the time spent developing most automation may never pay off, as testing finishes soon after development finishes. Manual testing has the advantage of flexibility: humans can pick up any build and continue to check it. In a long-term project with only minor changes, automation costs can be recouped by running for years with only minimal maintenance, and automation will likely become cheaper than the equivalent manual testing.
- Do you have any simple functionality involving large amounts of data?
In areas which are simple to test but involve large amounts of varied data, automation development costs may be low, with large payoffs. For example, testing that every one of 1000 configuration files loads without error may be simple to develop as an automated test, but would take manual QA multiple days to check through (see the sketch after this list). Likewise, localisation testing involves checking the same functionality for each language; if the automation can run in one language, it is likely to take no extra effort to run it in all the others.
- Do you have functionality where failures cause large knock-on effects?
For some products, there may be areas in which a failure will cause large knock-on effects for future development and testing. For example, if the product takes 2 hours for manual QA to download but is untestable if it crashes on launch, automating this simple check may provide value by reducing lost time for manual QA (every build your automation catches early saves 2 hours times the number of testers in wages).
- Do you have legal requirements that need to be met?
For some products, the cost of not testing needs to be taken into account. While automation may be more expensive in the average case, it is sometimes necessary to compare its cost to the worst case. That is, if a manual test fails to catch a bug which later makes your company liable to be sued, the cost of automation may be justified by the reduced risk, despite being more expensive than the almost-equivalent manual checks.
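As a sketch of the configuration-file example above, a parametrised pytest test can turn each file into its own test case; CONFIG_DIR and load_config here are hypothetical stand-ins for a real project's config location and loader.

```python
# Sketch of the "1000 configuration files" example as a parametrised pytest test.
# CONFIG_DIR and load_config are hypothetical stand-ins for the real project's
# config location and loader.
import json
from pathlib import Path

import pytest

CONFIG_DIR = Path("configs")                       # assumed location of the files
CONFIG_FILES = sorted(CONFIG_DIR.glob("*.json"))   # e.g. the ~1000 files


def load_config(path: Path) -> dict:
    """Hypothetical loader; a real project would use its own."""
    return json.loads(path.read_text())


@pytest.mark.parametrize("config_file", CONFIG_FILES, ids=lambda p: p.name)
def test_config_file_loads_without_error(config_file):
    # Each file becomes its own pass/fail result; one automated run replaces
    # days of opening every file by hand.
    assert isinstance(load_config(config_file), dict)
```

Run under pytest, each of the files shows up as a separate pass or fail, which is the kind of large, cheap payoff the bullet above describes.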
Summary
There is no solid rule for what should and should not be automated. Each company pays different amounts for their manual QA, and for their automation developers - what makes financial sense in one company may not make sense in another, even with identical products.
As a final rule of thumb:
Automation can be considered an investment which needs to be run repeatedly to pay off against manual QA. That is, automation starts expensive but scales well.
Manual QA is a flat fee which often starts cheaper than automation, but continues to cost throughout the project. That is, manual QA starts cheap but scales badly.
This isn't a decision that should ever be made at the level of a product; it's a decision that gets made at the level of a given set of tasks for a particular test. You may consider re-wording your answer to help readers understand that making this decision at the level of a whole product is probably a bad idea.
– Iron Gremlin
Nov 16 at 1:56
Answer (0 votes)
Once upon a time, I read and later wrote test plans which specified that certain tests must be done manually before each release.
The reasoning was that we had seen automated tests show all green even when the system was broken. When that happened we usually tried to fix the automated tests, but we accepted that no affordable test automation would give us the peace of mind of having a person say "I tested it on the stage system and it looks good" or "I logged in and it works, take the node into the load balancer again."
For instance, I have seen Selenium test suites where one test would log in, click its way to the profile page, and verify that the profile page opens, while another test would log in to create a session, navigate directly to the profile page, and then test it. Guess what? There was a new profile page with a different click path; the first test got changed, but the developers did not remove the old profile page or the second test. So the Selenium tests no longer represented the customer journey, yet they were all green.
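In code, the pitfall looks roughly like the sketch below (URLs and element IDs are hypothetical). The second test creates a session and jumps straight to a URL, so it keeps passing even after the click path that real customers follow has changed.

```python
# Sketch of the pitfall described above; URLs and element IDs are hypothetical.
# The first test follows the click path customers actually use; the second
# creates a session and jumps straight to the page URL, so it stays green even
# after the real navigation has changed.
from selenium import webdriver
from selenium.webdriver.common.by import By


def _log_in(driver):
    driver.get("https://example.test/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()


def test_profile_via_click_path():
    driver = webdriver.Chrome()
    try:
        _log_in(driver)
        driver.find_element(By.ID, "nav-profile").click()          # the customer journey
        assert driver.find_element(By.ID, "profile-header").is_displayed()
    finally:
        driver.quit()


def test_profile_via_direct_url():
    driver = webdriver.Chrome()
    try:
        _log_in(driver)
        driver.get("https://example.test/profile")                 # bypasses the navigation
        assert driver.find_element(By.ID, "profile-header").is_displayed()
    finally:
        driver.quit()
```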
There are other tests which should be automated where possible: unit tests, API tests, big tests with a mind-numbing number of subtle variants. But automated tests merely prove that all assertions are green, which is necessary but not sufficient for a release.
add a comment |
10 Answers
10
active
oldest
votes
10 Answers
10
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
23
down vote
IMHO, Anything which is monotonous & repeatable in testing can & should
be automated.
Having said that,
manual testing is irreplaceable and should be utilized for creative
exploratory testing which is purely driven by tester's experience and
intuition
by using 'What if' questions to dig deeper beyond obvious test scenarios which takes skills and creativity.
2
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
add a comment |
up vote
23
down vote
IMHO, Anything which is monotonous & repeatable in testing can & should
be automated.
Having said that,
manual testing is irreplaceable and should be utilized for creative
exploratory testing which is purely driven by tester's experience and
intuition
by using 'What if' questions to dig deeper beyond obvious test scenarios which takes skills and creativity.
2
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
add a comment |
up vote
23
down vote
up vote
23
down vote
IMHO, Anything which is monotonous & repeatable in testing can & should
be automated.
Having said that,
manual testing is irreplaceable and should be utilized for creative
exploratory testing which is purely driven by tester's experience and
intuition
by using 'What if' questions to dig deeper beyond obvious test scenarios which takes skills and creativity.
IMHO, Anything which is monotonous & repeatable in testing can & should
be automated.
Having said that,
manual testing is irreplaceable and should be utilized for creative
exploratory testing which is purely driven by tester's experience and
intuition
by using 'What if' questions to dig deeper beyond obvious test scenarios which takes skills and creativity.
edited Nov 15 at 0:45
answered Nov 14 at 11:00
V.A.
2,6101725
2,6101725
2
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
add a comment |
2
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
2
2
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
In order to determine what is monotonous and repeatable, you have to first do manual testing to know that. Fringe cases can become monotonous and repeatable once you do enough of it and can intuitively create the correct process to reflect the fringe cases, then it simply moves the automation further and you go manually test until you can repeat the same thing.
– Nelson
Nov 15 at 5:53
add a comment |
up vote
10
down vote
The answer is "it depends".
Let's say testing is divided into two main categories; Functional and Exploratory:
Functional
Functional means "Prove that something works as per defined requirements", generally by following a test script:
- Click Button A.
- Enter this text into Textbox 2: "foo".
- Click button B. The screen should then turn bright pink.
Exploratory
Exploratory means "Try to break this", generally by the tester's own creativity and ingenuity. e.g.
- Click Button A
- Now paste in 10,000 Emoji characters into Textbox2
- Click button C, not B.
- Does that cause something interesting and unexpected to happen, etc.
In general
Generally speaking, you should aim to automate the first kind of tests - normally after first performing them manually so you know they pass.
But you can, with the right kind of software development flow, sometimes develop them before the code is even written. However in practice this is unfortunately rare.
However
One thing to be aware of is that sometimes it is hard to automate some functional tests.
- Web and Desktop UI applications are pretty easy, since there is a well-defined model to work against and many tools to help automate these (Selenium, etc).
- However writing automated tests for a service app with no UI and with a very poor API can be hard.
- Tests that require specialized hardware or licenses for the thing being tested can be hard as well because you just don't have non-production resources to test against.
- So you will find that the ability to automate these things varies.
Notes on exploratory testing
The second kind (Exploratory) generally cannot be automated, because they rely on human intuition. However, there are exceptions - for example, data entry forms can be automated via "fuzz" tests which will try lots and lots of combinations of inputs to see if they can find some combination that causes a problem.
So when people say that all tests should be automated, yes they generally mean the Functional tests and not the Exploratory tests.
Exploratory as an input to Functional automation
Another thing to consider is that Exploratory testing can often be an input into automated Functional testing; so for example, once you've found an edge case using manual testing, creating an automated regression test for that case can provide great value. In this was you've turned an unknown case into a defined, functional case. It's also a good time to talk to the Stakeholders and Developers to figure out if that new behaviour you've just discovered should be kept :)
How do I decide which approach is correct?
The answer is again, it depends on things like
- How much time you have to write automation
- How much writing that automation will cost
- How much value it will provide.
There is no point in manually performing simple functional tests thousands of times on a critical part of the application if a single day of coding can give you full automation on it (e.g a Login dialog box); on the other hand there is no point spending a month developing automation for a feature that is not important (like say a non-critical feature for an applications' "About" dialog box)
Hope that helps.
New contributor
1
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
add a comment |
up vote
10
down vote
The answer is "it depends".
Let's say testing is divided into two main categories; Functional and Exploratory:
Functional
Functional means "Prove that something works as per defined requirements", generally by following a test script:
- Click Button A.
- Enter this text into Textbox 2: "foo".
- Click button B. The screen should then turn bright pink.
Exploratory
Exploratory means "Try to break this", generally by the tester's own creativity and ingenuity. e.g.
- Click Button A
- Now paste in 10,000 Emoji characters into Textbox2
- Click button C, not B.
- Does that cause something interesting and unexpected to happen, etc.
In general
Generally speaking, you should aim to automate the first kind of tests - normally after first performing them manually so you know they pass.
But you can, with the right kind of software development flow, sometimes develop them before the code is even written. However in practice this is unfortunately rare.
However
One thing to be aware of is that sometimes it is hard to automate some functional tests.
- Web and Desktop UI applications are pretty easy, since there is a well-defined model to work against and many tools to help automate these (Selenium, etc).
- However writing automated tests for a service app with no UI and with a very poor API can be hard.
- Tests that require specialized hardware or licenses for the thing being tested can be hard as well because you just don't have non-production resources to test against.
- So you will find that the ability to automate these things varies.
Notes on exploratory testing
The second kind (Exploratory) generally cannot be automated, because they rely on human intuition. However, there are exceptions - for example, data entry forms can be automated via "fuzz" tests which will try lots and lots of combinations of inputs to see if they can find some combination that causes a problem.
So when people say that all tests should be automated, yes they generally mean the Functional tests and not the Exploratory tests.
Exploratory as an input to Functional automation
Another thing to consider is that Exploratory testing can often be an input into automated Functional testing; so for example, once you've found an edge case using manual testing, creating an automated regression test for that case can provide great value. In this was you've turned an unknown case into a defined, functional case. It's also a good time to talk to the Stakeholders and Developers to figure out if that new behaviour you've just discovered should be kept :)
How do I decide which approach is correct?
The answer is again, it depends on things like
- How much time you have to write automation
- How much writing that automation will cost
- How much value it will provide.
There is no point in manually performing simple functional tests thousands of times on a critical part of the application if a single day of coding can give you full automation on it (e.g a Login dialog box); on the other hand there is no point spending a month developing automation for a feature that is not important (like say a non-critical feature for an applications' "About" dialog box)
Hope that helps.
New contributor
1
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
add a comment |
up vote
10
down vote
up vote
10
down vote
The answer is "it depends".
Let's say testing is divided into two main categories; Functional and Exploratory:
Functional
Functional means "Prove that something works as per defined requirements", generally by following a test script:
- Click Button A.
- Enter this text into Textbox 2: "foo".
- Click button B. The screen should then turn bright pink.
Exploratory
Exploratory means "Try to break this", generally by the tester's own creativity and ingenuity. e.g.
- Click Button A
- Now paste in 10,000 Emoji characters into Textbox2
- Click button C, not B.
- Does that cause something interesting and unexpected to happen, etc.
In general
Generally speaking, you should aim to automate the first kind of tests - normally after first performing them manually so you know they pass.
But you can, with the right kind of software development flow, sometimes develop them before the code is even written. However in practice this is unfortunately rare.
However
One thing to be aware of is that sometimes it is hard to automate some functional tests.
- Web and Desktop UI applications are pretty easy, since there is a well-defined model to work against and many tools to help automate these (Selenium, etc).
- However writing automated tests for a service app with no UI and with a very poor API can be hard.
- Tests that require specialized hardware or licenses for the thing being tested can be hard as well because you just don't have non-production resources to test against.
- So you will find that the ability to automate these things varies.
Notes on exploratory testing
The second kind (Exploratory) generally cannot be automated, because they rely on human intuition. However, there are exceptions - for example, data entry forms can be automated via "fuzz" tests which will try lots and lots of combinations of inputs to see if they can find some combination that causes a problem.
So when people say that all tests should be automated, yes they generally mean the Functional tests and not the Exploratory tests.
Exploratory as an input to Functional automation
Another thing to consider is that Exploratory testing can often be an input into automated Functional testing; so for example, once you've found an edge case using manual testing, creating an automated regression test for that case can provide great value. In this was you've turned an unknown case into a defined, functional case. It's also a good time to talk to the Stakeholders and Developers to figure out if that new behaviour you've just discovered should be kept :)
How do I decide which approach is correct?
The answer is again, it depends on things like
- How much time you have to write automation
- How much writing that automation will cost
- How much value it will provide.
There is no point in manually performing simple functional tests thousands of times on a critical part of the application if a single day of coding can give you full automation on it (e.g a Login dialog box); on the other hand there is no point spending a month developing automation for a feature that is not important (like say a non-critical feature for an applications' "About" dialog box)
Hope that helps.
New contributor
The answer is "it depends".
Let's say testing is divided into two main categories; Functional and Exploratory:
Functional
Functional means "Prove that something works as per defined requirements", generally by following a test script:
- Click Button A.
- Enter this text into Textbox 2: "foo".
- Click button B. The screen should then turn bright pink.
Exploratory
Exploratory means "Try to break this", generally by the tester's own creativity and ingenuity. e.g.
- Click Button A
- Now paste in 10,000 Emoji characters into Textbox2
- Click button C, not B.
- Does that cause something interesting and unexpected to happen, etc.
In general
Generally speaking, you should aim to automate the first kind of tests - normally after first performing them manually so you know they pass.
But you can, with the right kind of software development flow, sometimes develop them before the code is even written. However in practice this is unfortunately rare.
However
One thing to be aware of is that sometimes it is hard to automate some functional tests.
- Web and Desktop UI applications are pretty easy, since there is a well-defined model to work against and many tools to help automate these (Selenium, etc).
- However writing automated tests for a service app with no UI and with a very poor API can be hard.
- Tests that require specialized hardware or licenses for the thing being tested can be hard as well because you just don't have non-production resources to test against.
- So you will find that the ability to automate these things varies.
Notes on exploratory testing
The second kind (Exploratory) generally cannot be automated, because they rely on human intuition. However, there are exceptions - for example, data entry forms can be automated via "fuzz" tests which will try lots and lots of combinations of inputs to see if they can find some combination that causes a problem.
So when people say that all tests should be automated, yes they generally mean the Functional tests and not the Exploratory tests.
Exploratory as an input to Functional automation
Another thing to consider is that Exploratory testing can often be an input into automated Functional testing; so for example, once you've found an edge case using manual testing, creating an automated regression test for that case can provide great value. In this was you've turned an unknown case into a defined, functional case. It's also a good time to talk to the Stakeholders and Developers to figure out if that new behaviour you've just discovered should be kept :)
How do I decide which approach is correct?
The answer is again, it depends on things like
- How much time you have to write automation
- How much writing that automation will cost
- How much value it will provide.
There is no point in manually performing simple functional tests thousands of times on a critical part of the application if a single day of coding can give you full automation on it (e.g a Login dialog box); on the other hand there is no point spending a month developing automation for a feature that is not important (like say a non-critical feature for an applications' "About" dialog box)
Hope that helps.
New contributor
edited Nov 15 at 11:22
New contributor
answered Nov 14 at 17:08
Stephen Byrne
2015
2015
New contributor
New contributor
1
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
add a comment |
1
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
1
1
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
I would like to add that for exploratory testing. If you find an issue, convert it to a functional test case so a developer can fix the issue, and prevent it from occurring again and that the tester does not have to test it again.
– Viktor Mellgren
Nov 15 at 9:12
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
@ViktorMellgren - yes indeed, a very good point! I should probably update the answer to indicate that exploratory testing is an input into the automation of functional testing.
– Stephen Byrne
Nov 15 at 11:18
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
Libraries like python's hypothesis is a form of automated exploratory testing - so it can be automated, at least to a degree. But it doesn't really replace manual exploratory testing.
– Shadow
Nov 16 at 2:00
add a comment |
up vote
3
down vote
This is a pretty straight forward question. I think everyone will agree to this:
Is manual testing necessary ? --> A must
Can we replace everything with automation ? --> Mostly NO. When it comes to automation testing an application in a project, there are a lot of factors that is considered (e.g. timeline, feasibility, ROI, maintainability, future plans). In my experience, you have to be wise in deciding the extent of automation that you are planing in the project.
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
add a comment |
up vote
3
down vote
This is a pretty straight forward question. I think everyone will agree to this:
Is manual testing necessary ? --> A must
Can we replace everything with automation ? --> Mostly NO. When it comes to automation testing an application in a project, there are a lot of factors that is considered (e.g. timeline, feasibility, ROI, maintainability, future plans). In my experience, you have to be wise in deciding the extent of automation that you are planing in the project.
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
add a comment |
up vote
3
down vote
up vote
3
down vote
This is a pretty straight forward question. I think everyone will agree to this:
Is manual testing necessary ? --> A must
Can we replace everything with automation ? --> Mostly NO. When it comes to automation testing an application in a project, there are a lot of factors that is considered (e.g. timeline, feasibility, ROI, maintainability, future plans). In my experience, you have to be wise in deciding the extent of automation that you are planing in the project.
This is a pretty straight forward question. I think everyone will agree to this:
Is manual testing necessary ? --> A must
Can we replace everything with automation ? --> Mostly NO. When it comes to automation testing an application in a project, there are a lot of factors that is considered (e.g. timeline, feasibility, ROI, maintainability, future plans). In my experience, you have to be wise in deciding the extent of automation that you are planing in the project.
answered Nov 14 at 6:08
Kshetra Mohan Prusty
532312
532312
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
add a comment |
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
This answer would be more valuable if you fleshed out why manual testing is a must. "He said, she said" isn't encouraged on stack exchange.
– Shadow
Nov 16 at 2:01
add a comment |
up vote
2
down vote
In summary, computers can only do test execution, and only a subset of it. Since testing encompasses more than execution, the answer is: No.
For more details and other factors, see my blog post on it:
http://thatsabug.com/automation/testing/2018/11/08/why_automation_will_not_save_you.html
add a comment |
up vote
2
down vote
In summary, computers can only do test execution, and only a subset of it. Since testing encompasses more than execution, the answer is: No.
For more details and other factors, see my blog post on it:
http://thatsabug.com/automation/testing/2018/11/08/why_automation_will_not_save_you.html
add a comment |
up vote
2
down vote
up vote
2
down vote
In summary, computers can only do test execution, and only a subset of it. Since testing encompasses more than execution, the answer is: No.
For more details and other factors, see my blog post on it:
http://thatsabug.com/automation/testing/2018/11/08/why_automation_will_not_save_you.html
In summary, computers can only do test execution, and only a subset of it. Since testing encompasses more than execution, the answer is: No.
For more details and other factors, see my blog post on it:
http://thatsabug.com/automation/testing/2018/11/08/why_automation_will_not_save_you.html
answered Nov 14 at 10:27
João Farias
2,006315
2,006315
add a comment |
add a comment |
up vote
1
down vote
Manual Testing is the main purpose of testing it self, it's definitely necessary.
You can replace everything with automation, if you're working on a product that will be use by no one (which is I know doesn't exist).
Testing is BETTER with Automation, but full Automation?, NO.
I think the ratio will be different on each person and each project, for me it's 70% manual vs 30% automation.
Like you did right now,feedback, insight, perspective, wild idea, etc.
And that's I believe something Automation can't give.
Even for a full automatic factory, they still involve human as safety fuse, right?
add a comment |
up vote
1
down vote
Manual Testing is the main purpose of testing it self, it's definitely necessary.
You can replace everything with automation, if you're working on a product that will be use by no one (which is I know doesn't exist).
Testing is BETTER with Automation, but full Automation?, NO.
I think the ratio will be different on each person and each project, for me it's 70% manual vs 30% automation.
Like you did right now,feedback, insight, perspective, wild idea, etc.
And that's I believe something Automation can't give.
Even for a full automatic factory, they still involve human as safety fuse, right?
add a comment |
up vote
1
down vote
up vote
1
down vote
Manual Testing is the main purpose of testing it self, it's definitely necessary.
You can replace everything with automation, if you're working on a product that will be use by no one (which is I know doesn't exist).
Testing is BETTER with Automation, but full Automation?, NO.
I think the ratio will be different on each person and each project, for me it's 70% manual vs 30% automation.
Like you did right now,feedback, insight, perspective, wild idea, etc.
And that's I believe something Automation can't give.
Even for a full automatic factory, they still involve human as safety fuse, right?
Manual Testing is the main purpose of testing it self, it's definitely necessary.
You can replace everything with automation, if you're working on a product that will be use by no one (which is I know doesn't exist).
Testing is BETTER with Automation, but full Automation?, NO.
I think the ratio will be different on each person and each project, for me it's 70% manual vs 30% automation.
Like you did right now,feedback, insight, perspective, wild idea, etc.
And that's I believe something Automation can't give.
Even for a full automatic factory, they still involve human as safety fuse, right?
answered Nov 14 at 7:27
BetaTester
979
979
add a comment |
add a comment |
up vote
1
down vote
Not everything can be replaced by Automation testing and nor everything can be covered by manual testing, speaking int context of testing , they are in proportion.
Automation testing help cover parts of manual testing which is repeated and can be used for stable builds with no major updates frequently.
So, Yes.Manual testing is necessary.
add a comment |
up vote
1
down vote
Not everything can be replaced by Automation testing and nor everything can be covered by manual testing, speaking int context of testing , they are in proportion.
Automation testing help cover parts of manual testing which is repeated and can be used for stable builds with no major updates frequently.
So, Yes.Manual testing is necessary.
add a comment |
up vote
1
down vote
up vote
1
down vote
Not everything can be replaced by Automation testing and nor everything can be covered by manual testing, speaking int context of testing , they are in proportion.
Automation testing help cover parts of manual testing which is repeated and can be used for stable builds with no major updates frequently.
So, Yes.Manual testing is necessary.
Not everything can be replaced by Automation testing and nor everything can be covered by manual testing, speaking int context of testing , they are in proportion.
Automation testing help cover parts of manual testing which is repeated and can be used for stable builds with no major updates frequently.
So, Yes.Manual testing is necessary.
answered Nov 14 at 9:09
Prasad_Joshi
271210
271210
add a comment |
add a comment |
up vote
1
down vote
We can't make Manual Testing as Zero but we could minimize it.
Things which are critical and repeatable must be automated.There has to be a starategy in place for converting Manual tasks to automated.
New contributor
add a comment |
up vote
1
down vote
We can't make Manual Testing as Zero but we could minimize it.
Things which are critical and repeatable must be automated.There has to be a starategy in place for converting Manual tasks to automated.
New contributor
add a comment |
up vote
1
down vote
up vote
1
down vote
We can't make Manual Testing as Zero but we could minimize it.
Things which are critical and repeatable must be automated.There has to be a starategy in place for converting Manual tasks to automated.
New contributor
We can't make Manual Testing as Zero but we could minimize it.
Things which are critical and repeatable must be automated.There has to be a starategy in place for converting Manual tasks to automated.
New contributor
New contributor
answered Nov 14 at 17:15
user35633
111
111
New contributor
New contributor
add a comment |
add a comment |
up vote
0
down vote
One thing you can not test automatically is user-interface acceptance.
Maybe you can automate things like "does the OK button exist" and "does the correct thing happen if you generate a click-even on the OK button". But you can't catch bugs like the OK button being rendered in a size of 1x1 pixels, positioned outside of the viewable area, behind a different button, with the label "OK" in the wrong language, upside down and written in white font on white background.
A human tester would notice immediately that there is something wrong. But an automated test suit would need to be really advanced to detect these errors. And if you build such an advanced test suit, it would generate a ton of false positives. A human user might not even notice that the button is a pixel smaller than it used to be, as along as they can still find it. But an automatic test suit can't differ between notable and unnotable differences.
So no matter how much you automate your testing, you should always have a human tester in your deployment process as a final sanity check.
New contributor
add a comment |
up vote
0
down vote
One thing you can not test automatically is user-interface acceptance.
Maybe you can automate things like "does the OK button exist" and "does the correct thing happen if you generate a click-even on the OK button". But you can't catch bugs like the OK button being rendered in a size of 1x1 pixels, positioned outside of the viewable area, behind a different button, with the label "OK" in the wrong language, upside down and written in white font on white background.
A human tester would notice immediately that there is something wrong. But an automated test suit would need to be really advanced to detect these errors. And if you build such an advanced test suit, it would generate a ton of false positives. A human user might not even notice that the button is a pixel smaller than it used to be, as along as they can still find it. But an automatic test suit can't differ between notable and unnotable differences.
So no matter how much you automate your testing, you should always have a human tester in your deployment process as a final sanity check.
New contributor
add a comment |
up vote
0
down vote
up vote
0
down vote
One thing you can not test automatically is user-interface acceptance.
Maybe you can automate things like "does the OK button exist" and "does the correct thing happen if you generate a click-even on the OK button". But you can't catch bugs like the OK button being rendered in a size of 1x1 pixels, positioned outside of the viewable area, behind a different button, with the label "OK" in the wrong language, upside down and written in white font on white background.
A human tester would notice immediately that there is something wrong. But an automated test suit would need to be really advanced to detect these errors. And if you build such an advanced test suit, it would generate a ton of false positives. A human user might not even notice that the button is a pixel smaller than it used to be, as along as they can still find it. But an automatic test suit can't differ between notable and unnotable differences.
So no matter how much you automate your testing, you should always have a human tester in your deployment process as a final sanity check.
New contributor
One thing you can not test automatically is user-interface acceptance.
Maybe you can automate things like "does the OK button exist" and "does the correct thing happen if you generate a click-even on the OK button". But you can't catch bugs like the OK button being rendered in a size of 1x1 pixels, positioned outside of the viewable area, behind a different button, with the label "OK" in the wrong language, upside down and written in white font on white background.
A human tester would notice immediately that there is something wrong. But an automated test suit would need to be really advanced to detect these errors. And if you build such an advanced test suit, it would generate a ton of false positives. A human user might not even notice that the button is a pixel smaller than it used to be, as along as they can still find it. But an automatic test suit can't differ between notable and unnotable differences.
So no matter how much you automate your testing, you should always have a human tester in your deployment process as a final sanity check.
New contributor
edited Nov 15 at 12:45
New contributor
answered Nov 15 at 12:40
Philipp
1014
1014
New contributor
New contributor
add a comment |
add a comment |
up vote
0
down vote
The key factor is not technical ability, but cost
Other answers have given good insights into the kinds of tests that can be readily automated, and those that can be performed better (on a technical/quality level) using Manual QA. However, I feel one key thing that is often missed is that the real decision is never "can we automate these tests", but "is it more cost effective to automate these tests".
When developing a test automation strategy, the goal is exactly the same as a manual testing strategy - to increase the product quality.
Similar to moving QA away from developers, to dedicated QA staff. Automation is not a move that directly increases quality by itself. Instead, it is simply a different way to achieve the same quality increase - which has different costs associated with it.]
Importantly, given enough time and resources - automation can be used to perform all testing on a project. However, the cost associated with most of this testing is far higher than hiring manual QA to test the same areas.
Note, that cost here does not exclusively refer to monetary cost; but also the time required and how that impacts the release schedule of your product
As such, in any situation when considering what "can" and "cannot" be automated - the real question that needs asked is; in what areas would using automation be more cost effective than using manual QA.
Key Considerations
When determining which areas may be appropriate for automation, some key criteria may be:
- Is your product a single release, or a long-term service?
While development is ongoing and features are being changed, automation will continue to require development to meet the changing requirements. In a single-release product, such as a video game, the time spent developing most automation may never pay off, as testing finishes soon after development finishes. Manual testing has the advantage of flexibility: humans can pick up any build and continue to check it. In a long-term project with only minor changes, automation costs can be recouped by running for years with only minimal maintenance, and will likely become cheaper than the equivalent manual testing.
- Do you have any simple functionalities involving large amounts of data?
In areas which are simple to test but involve large amounts of varied data, automation development costs may be low, with large payoffs. For example, testing that every one of 1000 configuration files loads without error may be simple to develop as an automated test (see the sketch after this list), but would have taken manual QA multiple days to check through. Likewise, localisation testing involves checking the same functionality for each language: if the automation can run in one language, it is likely to take no extra effort to run it in all the others.
- Do you have functionality where failures cause large knock-ons?
For some products, there may be areas in which a failure will cause large knock-ons to future development and testing. For example, if the product takes 2 hours for manual QA to download but is untestable if it crashes on launch, automating this simple launch check may provide value by reducing lost time for manual QA (every broken build the automation catches early saves 2 hours times the number of testers in wasted time and wages).
- Do you have legal requirements that need to be met?
For some products, the cost of not testing needs to be taken into account. While automation may be more expensive in the average case, it is sometimes necessary to compare its cost to the worst case. That is, if a manual test fails to catch a bug that later leaves your company liable to be sued, the cost of automation may be justified by the reduced risk, despite being more expensive than the almost-equivalent manual checks.
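As an illustration of the "large amounts of simple data" point above, a parametrised test along these lines turns checking 1000 configuration files into one cheap automated run. This is only a sketch: the configs/ directory, the JSON format and the loader are assumptions for the example, not details taken from the answer.

    # Sketch: verify that every configuration file in a directory loads
    # without error. Directory, file format and loader are hypothetical.
    from pathlib import Path
    import json

    import pytest

    CONFIG_DIR = Path("configs")   # hypothetical location of the ~1000 files


    def load_config(path: Path) -> dict:
        """Stand-in for the product's real configuration loader."""
        with path.open(encoding="utf-8") as handle:
            return json.load(handle)


    @pytest.mark.parametrize("config_file", sorted(CONFIG_DIR.glob("*.json")), ids=str)
    def test_config_loads_cleanly(config_file: Path) -> None:
        config = load_config(config_file)
        assert config, f"{config_file} loaded but was empty"

The same parametrisation trick applies to the localisation case: add the language code as a second parameter and the one test covers every language.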
Summary
There is no solid rule for what should and should not be automated. Each company pays different amounts for their manual QA, and for their automation developers - what makes financial sense in one company may not make sense in another, even with identical products.
As a final rule of thumb:
Automation can be considered an investment which needs to be run repeatedly to pay off against manual QA. That is, automation starts expensive but scales well.
Manual QA is a flat fee which often starts cheaper than automation, but continues to be a cost throughout the project. That is, manual QA starts cheap but scales badly.
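One way to make this investment-versus-flat-fee trade-off concrete is a back-of-the-envelope break-even calculation. The figures below are invented purely for illustration; plug in your own team's numbers.

    # Illustrative sketch: after how many regression runs does an automated
    # suite become cheaper than repeating the same checks manually?
    # All figures are made-up assumptions, not data from any real project.
    AUTOMATION_BUILD_COST = 8000.0   # one-off cost to develop the suite
    AUTOMATION_RUN_COST = 50.0       # maintenance/infrastructure per run
    MANUAL_RUN_COST = 600.0          # tester time for one manual regression pass


    def total_cost(runs: int, upfront: float, per_run: float) -> float:
        return upfront + runs * per_run


    break_even = next(
        runs for runs in range(1, 10_000)
        if total_cost(runs, AUTOMATION_BUILD_COST, AUTOMATION_RUN_COST)
        <= total_cost(runs, 0.0, MANUAL_RUN_COST)
    )
    print(f"Automation pays off after roughly {break_even} regression runs")
    # With these numbers: 8000 + 50n <= 600n, so n >= ~14.6 - about 15 runs.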
answered Nov 15 at 13:20
Bilkokuya
1842
1
This isn't a decision that should ever be made at the level of a product; it's a decision that gets made at the level of a given set of tasks for a particular test. You may consider re-wording your answer to help readers understand that making this decision at the level of a whole product is probably a bad idea.
– Iron Gremlin
Nov 16 at 1:56
add a comment |
up vote
0
down vote
Once upon a time, I read and later wrote test plans which specified that certain tests must be done manually before each release.
The reasoning was that we had seen automated tests show all green even when the system was broken. When that happened, we usually tried to fix the automated tests, but we accepted that no affordable test automation would give us the peace of mind of having a person say "I tested it on the stage system and it looks good" or "I logged in and it works, take the node back into the load balancer."
For instance, I have seen Selenium test suites where one test would log in, click its way to the profile page, and verify that the profile page opens, and another test would log in to create a session, navigate directly to the profile page, and then test it. Guess what? There was a new profile page with a different click path; the first test got changed, but the developers did not remove the old profile page or the second test. So the Selenium tests no longer represented the customer journey. Yet they were all green.
There are other tests which should be automated if possible: unit tests, API tests, big tests with a mind-numbing number of subtle variants. But automated tests merely prove that all assertions are green, which is necessary but not sufficient for a release.
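To illustrate the Selenium anecdote above, the drift typically looks something like the following sketch (site, URLs, element ids and credentials are all hypothetical): the first test follows the click path a customer actually takes, while the second jumps straight to a URL and can stay green long after that page has dropped out of the real customer journey.

    # Sketch (hypothetical site, ids and credentials) of how a suite can stay
    # green while drifting away from the real customer journey.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By


    @pytest.fixture
    def driver():
        drv = webdriver.Chrome()
        yield drv
        drv.quit()


    def log_in(drv):
        drv.get("https://example.com/login")                  # hypothetical
        drv.find_element(By.ID, "user").send_keys("test-user")
        drv.find_element(By.ID, "password").send_keys("not-a-real-password")
        drv.find_element(By.ID, "submit").click()


    def test_profile_via_click_path(driver):
        # Follows the journey a customer takes; fails if the menu changes.
        log_in(driver)
        driver.find_element(By.ID, "account-menu").click()
        driver.find_element(By.LINK_TEXT, "Profile").click()
        assert "profile" in driver.current_url


    def test_profile_via_direct_url(driver):
        # Jumps straight to the page, so it stays green even if no customer
        # can reach that page any more - exactly the rot described above.
        log_in(driver)
        driver.get("https://example.com/profile")             # hypothetical old URL
        assert driver.find_element(By.ID, "profile-header").is_displayed()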
answered 2 days ago
o.m.
25614
add a comment |
Pranali Mane is a new contributor. Be nice, and check out our Code of Conduct.