Is it a good idea to write all possible test cases after transitioning the team to TDD?

Assume we have a large enterprise-level application without any unit or functional tests. There was no test-driven development process during development because of very tight deadlines. (I know we should never promise tight deadlines when we are not sure, but what's done is done!)

Now that all the deadlines have passed and things are calm, everyone has agreed to transform us into a productive TDD/BDD-based team... Yay!

Now the question is about the code we already have: (1) Is it still a good idea to stop most development and write every possible test case from the beginning, even though everything is working completely okay (yet!)? Or (2) is it better to wait for something bad to happen and write new unit tests during the fix? Or (3) should we forget about the previous code, write unit tests only for new code, and postpone everything else to the next major refactor?

There are a few good related articles, such as this one. I'm still not sure whether it's worth investing in this, considering that we have very limited time and many other projects are waiting for us.

Note: This question describes an imaginary, totally awkward situation in a development team. It is not about me or any of my colleagues. You may think this should never happen, or that the development manager is responsible for such a mess, but what's done is done. If possible, please don't downvote just because you think this should never happen.
unit-testing testing tdd bdd acceptance-testing






asked 11 hours ago (edited 5 hours ago)
– Michel Gokan
  • Possible duplicate of Do I need unit test if I already have integration test?
    – gnat
    11 hours ago










  • You should probably be preparing for the next time deadlines arrive and you're not allowed to do TDD any more. Possibly by telling whoever drove the last round of technical debtvelopment why that wasn't a great idea.
    – jonrsharpe
    11 hours ago










  • @gnat I think it is not a duplicate question. The team in question doesn't have any kind of tests (not even integration tests).
    – Michel Gokan
    11 hours ago










  • @gnat the question is: what will happen to our new unit tests? They might seem incomplete, or even worthless, without unit tests for the previously written code. The question you mention does not cover this specific concern.
    – Michel Gokan
    10 hours ago
3 Answers
Is it still a good idea to stop most development and write every possible test case from the beginning [...]?




Given legacy[1] code, write unit tests in these situations:




  • when fixing bugs

  • when refactoring

  • when adding new functionality to the existing code


As useful as unit tests are, creating a complete unit test suite for an existing[1] codebase probably isn't realistic. The powers that be pushed you to deliver on a tight deadline and didn't allow you time to create adequate unit tests as you were developing. Do you think they will give you adequate time to create tests for the "program that works"?
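To make the first of those situations concrete, here is a minimal sketch of a regression test that rides along with a bug fix, so the fix and its test land together. It's written in Python; the parse_quantity function and its thousands-separator bug are hypothetical, invented purely for illustration.

    import unittest

    def parse_quantity(raw: str) -> int:
        # Hypothetical legacy parser; the reported bug was that input
        # with a thousands separator, like "1,000", crashed.
        return int(raw.replace(",", ""))  # removing the separator is the fix

    class ParseQuantityRegressionTest(unittest.TestCase):
        # Written during the fix: it reproduced the bug first (red),
        # and now guards against its return (green).
        def test_thousands_separator_is_accepted(self):
            self.assertEqual(parse_quantity("1,000"), 1000)

        def test_plain_input_still_works(self):
            self.assertEqual(parse_quantity("7"), 7)

    if __name__ == "__main__":
        unittest.main()

Over time, fixes accumulate a suite that covers exactly the parts of the legacy code that have proven fragile.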



[1] Legacy code is code without unit tests. This is the TDD definition of legacy code, and it applies even if the legacy code was freshly delivered (even if the ink hasn't dried yet).






answered 11 hours ago (edited 11 hours ago)
– Nick Alexeev























  • But then our new unit tests for new features may seem incomplete, or even worthless, given the missing unit tests for the older code. Isn't that so?
    – Michel Gokan
    11 hours ago










    (1) Worthless? Certainly not. At the very least, they test the new feature. Next time somebody wants to modify this feature, they will reuse much of the existing tests. (2) Incomplete? Maybe, maybe not. If you also create unit tests for the legacy functionality that the new feature depends on, then the tests may be complete enough for practical purposes. In other words, do create additional unit tests that penetrate the legacy functionality. Penetrate to what depth? It depends on the program's architecture, the resources available, and institutional support.
    – Nick Alexeev
    10 hours ago































In my experience, tests do not need total coverage to be helpful. Instead, you start reaping different kinds of benefits as coverage increases:




  • more than 30% coverage (aka a couple of integration tests): if your tests fail, something is extremely broken (or your tests are flaky). Thankfully the tests alerted you quickly! But releases will still require extensive manual testing.

  • more than 90% coverage (aka most of the components have superficial unit tests): if your tests pass, the software is likely mostly fine. The untested parts are edge cases, which is fine for non-critical software. But releases will still require some manual testing.

  • very high coverage of functions/statements/branches/requirements: you're living the TDD/BDD dream, and your tests are a precise reflection of the functionality of your software. You can refactor with high confidence, including large scale architectural changes. If the tests pass, your software is almost release ready, only some manual smoke testing required.


The truth is, if you don't start with BDD you're never going to get there, because the work required to test after coding is just excessive. The issue is not writing the tests so much as being aware of the actual requirements (rather than incidental implementation details) and being able to design the software in a way that is both functional and easy to test. When you write the tests first, or together with the code, this is practically free.
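As a minimal sketch of what "tests together with the code" looks like in practice (the discount_price function below is hypothetical, not from this answer): the test states the requirement, and the implementation only has to satisfy it.

    import unittest

    def discount_price(price: float, percent: float) -> float:
        # Implementation written to satisfy the tests below, which were
        # written first and capture the actual requirement.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class DiscountPriceTest(unittest.TestCase):
        def test_applies_percentage_discount(self):
            self.assertEqual(discount_price(200.0, 25), 150.0)

        def test_rejects_out_of_range_percent(self):
            with self.assertRaises(ValueError):
                discount_price(100.0, 150)

    if __name__ == "__main__":
        unittest.main()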



Since new features require tests, but tests require design changes, and refactoring also requires tests, you have a bit of a chicken-and-egg problem. As your software creeps closer to decent coverage, you'll have to do some careful refactoring in those parts of the code where new features occur, just to make the new features testable. This will slow you down a lot – initially. But by only refactoring and testing those parts where new development is needed, the tests also focus on the area where they are needed most. Stable code can continue without tests: if it were buggy, you'd have to change it anyway.



While you are adapting to TDD, a better metric than total project coverage is the test coverage in the parts that are being changed. This coverage should be very high right from the start, though it is not feasible to test every part of the code that is impacted by a refactoring. Also, you reap most of the benefits of high test coverage within the tested components. That's not perfect, but still fairly good.



Note that while unit tests seem to be the most common kind, starting with the smallest pieces is not a suitable strategy for getting legacy software under test. You'll want to start with integration tests that exercise a large chunk of the software at once. For example, I've found it useful to extract integration test cases from real-world logfiles. Of course, running such tests can take a lot of time, which is why you might want to set up an automated server that runs them regularly (e.g. a Jenkins server triggered by commits). The cost of setting up and maintaining such a server is very small compared to the cost of not running tests regularly, provided that any test failures actually get fixed quickly.
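A sketch of what extracting test cases from logfiles might look like, assuming a simple recorded request format and a handle_request entry point (both invented here for illustration):

    import unittest

    # Hypothetical (method, path, status) triples pulled from a production log.
    RECORDED = [
        ("GET", "/orders/42", 200),
        ("GET", "/orders/999", 404),
    ]

    def handle_request(method: str, path: str) -> int:
        # Stand-in for the real application entry point under test;
        # returns an HTTP status code.
        return 200 if path == "/orders/42" else 404

    class LogReplayIntegrationTest(unittest.TestCase):
        def test_replayed_requests_match_recorded_status(self):
            # Replay each logged request and compare against the recorded outcome.
            for method, path, expected in RECORDED:
                with self.subTest(path=path):
                    self.assertEqual(handle_request(method, path), expected)

    if __name__ == "__main__":
        unittest.main()

The appeal of this approach is that the expected results come from observed production behavior, so you get broad coverage without having to reverse-engineer the requirements first.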






answered 10 hours ago
– amon










































There was no test-driven development process during development because of very tight deadlines




This statement is very concerning. Not because it means you developed without TDD, or because you aren't testing everything. It is concerning because it shows you think TDD will slow you down and make you miss a deadline.



As long as you see it this way, you aren't ready for TDD. TDD isn't something you can gradually ease into. You either know how to do it or you don't. If you try doing it halfway, you're going to make it and yourself look bad.



TDD is something you should practice at home first. Learn to do it because it helps YOU code, not because someone told you to do it. When it becomes something you do BECAUSE you're in a hurry, then you're ready to do it professionally.



TDD is something you can do in ANY shop. You don't even have to turn in your test code. You can keep it to yourself if the others disdain tests. The tests speed your development even if no one else runs them.



On the other hand, if others love and run your tests, you should still keep in mind that even in a TDD shop it's not your job to check in tests; it's to create proven, working production code. If it happens to be testable, neat.




(1) Is it still a good idea to stop most development and write every possible test case from the beginning, even though everything is working completely okay (yet!)? Or




    No. This is "let's do a complete rewrite" thinking. This destroys hard won knowledge.




(2) is it better to wait for something bad to happen and write new unit tests during the fix? Or



(3) should we forget about the previous code, write unit tests only for new code, and postpone everything else to the next major refactor?




I'll answer 2 and 3 the same way: when you change the code, for any reason, it's really nice if you can slip in a test. If the code is legacy, it doesn't currently welcome a test, which means it's hard to test before changing it. Well, since you're changing it anyway, you can change it into something testable and test it.



That's the nuclear option. It's risky: you're making changes without tests. There are some creative tricks to put legacy code under test before you change it. You look for what are called seams, which allow you to change the behavior of your code without changing the code itself. You change config files, build files, whatever it takes.
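A minimal sketch of one kind of seam, where a test subclasses the legacy class and overrides the dangerous call without touching the production code. The InvoiceSender class and its methods are hypothetical, invented here for illustration:

    import unittest

    class InvoiceSender:  # hypothetical legacy class
        def send(self, invoice_id: int) -> str:
            body = self.render(invoice_id)
            self.deliver(body)  # the seam: a call the test can intercept
            return body

        def render(self, invoice_id: int) -> str:
            return f"Invoice #{invoice_id}"

        def deliver(self, body: str) -> None:
            raise RuntimeError("talks to a real SMTP server in production")

    class TestableInvoiceSender(InvoiceSender):
        def deliver(self, body: str) -> None:
            self.delivered = body  # capture the message instead of emailing it

    class InvoiceSenderSeamTest(unittest.TestCase):
        def test_send_renders_and_delivers(self):
            sender = TestableInvoiceSender()
            self.assertEqual(sender.send(7), "Invoice #7")
            self.assertEqual(sender.delivered, "Invoice #7")

    if __name__ == "__main__":
        unittest.main()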



Michael Feathers gave us a book about this: Working Effectively with Legacy Code. Give it a read and you'll see that you don't have to burn down everything old to make something new.






answered 5 hours ago (edited 1 hour ago)
– candied_orange






















