Alternative to Test::NoWarnings

The Test::NoWarnings module is very helpful for detecting subtle errors in your code. Unfortunately, by default, it does its reporting in an END block, and that doesn't play nicely with Test::More's done_testing() function, which I use a lot. There has been an open ticket about this since 2009 and nothing has been done.

There is a workaround, but it's a lot more verbose than I'd like:

use Test::More;
use Test::NoWarnings (); # skip import

# ... tests ...

Test::NoWarnings::had_no_warnings;
done_testing;

That's OK, but it's annoying to put over and over again in every .t file, even if I automate it with an editor macro.

Today I released a much simpler alternative called Test::FailWarnings. It just hooks $SIG{__WARN__} and turns warnings into Test::More fail() calls. This works well with done_testing(), because no test is ever added to the plan in advance; failures appear only if warnings actually occur.

Also, it issues the failures as they happen, instead of storing them up for the end. That's either a feature or a bug, depending on how you like to debug your failures.
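If you're curious how little machinery this takes, the core idea can be sketched in a few lines. To be clear, this is a simplification I'm writing for illustration, not the actual Test::FailWarnings source:

```perl
use strict;
use warnings;
use Test::More;

# Sketch of the mechanism: convert any warning into an immediate
# Test::More failure, with the warning text as a diagnostic.
$SIG{__WARN__} = sub {
    my $msg = shift;
    chomp $msg;
    fail("Caught warning");
    diag("Warning was '$msg'");
};

ok( 1, "a passing test" );

done_testing;
```

The real module does more (it reports where the warning came from, as the sample output below shows), but the hook-and-fail mechanism is the whole trick.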

Here's the obligatory example from the synopsis:

use strict;
use warnings;
use Test::More;
use Test::FailWarnings;
 
ok( 1, "first test" );
ok( 1 + "lkadjaks", "add non-numeric" );
 
done_testing;

When run, it produces output like this:

ok 1 - first test
not ok 2 - Caught warning
#   Failed test 'Caught warning'
#   at t/bin/main-warn.pl line 7.
# Warning was 'Argument "lkadjaks" isn't numeric in addition (+) at t/bin/main-warn.pl line 7.'
ok 3 - add non-numeric
1..3
# Looks like you failed 1 test of 3.

I'm sure there are many ways in which Test::FailWarnings is too simplistic (I welcome ideas or contributions), but I'm tired of waiting for a fix and this meets my needs now.


16 Comments

  1. Tommy
    Posted February 26, 2013 at 9:01 am | Permalink

    Yes! So much yes!

    Test::NoWarnings can be such a pain for these reasons and ::More. Saving up warnings until the END {} is also very unhelpful. If you use the newer ':early' pragma it breaks a lot of tests unless you specify a version new enough to support it in your prereqs. When exactly did it become a feature? There are NoWarnings about that in the documentation, so it becomes easier just to leave it out. I could go on and on about the way that module drives me crazy...

    • Posted February 26, 2013 at 9:06 am | Permalink

      Thank you! Glad you like it!

      • Tommy
        Posted February 26, 2013 at 11:54 am | Permalink

        I actually should have mentioned that my main complaint with Test::NoWarnings is that you can't skip_all. You just can't.

        Please forgive me for pasting in code, but this really is the pain we have to go through, which you might also want to mention in your writeup as to why Test::NoWarnings is such a huge pain:

        {
            local $@;

            CORE::eval 'use Test::Fatal';

            if ( $@ )
            {
                plan skip_all => 'Need Test::Fatal to run these tests';
            }
            else
            {
                require Test::Fatal;

                Test::Fatal->import( qw( exception dies_ok lives_ok ) );

                plan tests => 36;

                CORE::eval <<'__TEST_NOWARNINGS__';
        use Test::NoWarnings;
        __TEST_NOWARNINGS__
            }
        }

        • Posted March 10, 2013 at 1:49 pm | Permalink

          I've used Test::NoWarnings successfully in combination with skip_all (or a wrapper which does the same thing, Test::Requires) -- you just need to carefully order which modules load first.

          FWIW, Test::Warnings (freshly uploaded yesterday - https://metacpan.org/release/ETHER/Test-Warnings-0.001-TRIAL ) handles skip_all no matter what order you load things in; I've just added a test to confirm this.
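The load-ordering trick described above can be sketched roughly like this. (A sketch only, not verified against every Test::NoWarnings version; Some::Optional::Prereq is a hypothetical stand-in for whatever prerequisite drives the skip.)

```perl
use strict;
use warnings;
use Test::More;

# Make the skip decision *before* Test::NoWarnings is loaded.
# skip_all exits immediately, so when we are skipping, the require
# below never runs and no extra "no warnings" test is registered.
my $have_prereq = eval { require Some::Optional::Prereq; 1 };

if ( !$have_prereq ) {
    plan skip_all => 'Need Some::Optional::Prereq to run these tests';
}

plan tests => 2;            # 1 real test + 1 added by Test::NoWarnings

require Test::NoWarnings;   # safe now: we know we are not skipping
Test::NoWarnings->import;

ok( 1, "a test" );
```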

  2. Michael J
    Posted February 27, 2013 at 5:36 am | Permalink

This is great; we'd certainly switch from Test::NoWarnings provided there was a way to skip certain known warnings. There are several noisy modules on CPAN unlikely to be patched in the near future (e.g. PDF::API2), which prevent a blanket check that no warnings were generated.

    Possibly something like:

    our @WARNINGS_TO_IGNORE;

    $SIG{__WARN__} = sub {
        my $msg = shift;
        $msg = '' unless defined $msg;
        chomp $msg;

        foreach my $ignore (@WARNINGS_TO_IGNORE) {
            return if $msg =~ m{$ignore};
        }

        # ...
    };

    and then:

    # test.t

    use Test::Most;
    use Test::FailWarnings;

    @Test::FailWarnings::WARNINGS_TO_IGNORE
        = ( 'warn warn warn', qr/blah blah/ );

    sub test {
        warn shift;
        return 1;
    }

    ok test("warn warn warn");
    ok test("blah blah");
    ok test("unexpected warning!!!!");

    done_testing();

    thanks!

  3. Posted February 27, 2013 at 6:53 am | Permalink

    Would this be enough? https://github.com/dagolden/test-failwarnings/issues/1

    If not, please open up another issue with your idea and/or send a pull request.

    • Michael J
      Posted February 27, 2013 at 12:54 pm | Permalink

thanks, but that wouldn't be good enough for us, I don't think. Warnings from our own dependencies are certainly valid (and might not be caught by their own test suites); it's warnings from CPAN modules that are unlikely to get fixed that are the problem. Using PDF::API2 spews hundreds of warnings from TrueType font packages in certain cases. I'll add another issue for this, and a pull request, if you think the patch makes sense?

      • Posted February 27, 2013 at 2:20 pm | Permalink

        Seems sensible. Do you care about the *warning* or the *source*? E.g. @MODULES_TO_IGNORE?

        I might want it done via a class method: Test::FailWarnings->ignore_patterns( ... )

        That would allow some validation (qr or string) and could do an internal push so it could be called more than once.
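A minimal sketch of that class-method idea might look like this. Everything here is hypothetical design scratch, not the released API: the package name, ignore_patterns, and @IGNORE_PATTERNS are all made-up names for illustration.

```perl
package Local::FailWarnings::Sketch;
use strict;
use warnings;

our @IGNORE_PATTERNS;   # hypothetical accumulator variable

# Validate each pattern (plain string or qr//) and push it, so the
# method can be called more than once and patterns accumulate.
sub ignore_patterns {
    my ( $class, @patterns ) = @_;
    for my $p (@patterns) {
        die "patterns must be strings or qr// references"
            if ref $p && ref $p ne 'Regexp';
        push @IGNORE_PATTERNS, ref $p ? $p : qr/\Q$p\E/;
    }
    return scalar @IGNORE_PATTERNS;
}

# A $SIG{__WARN__} handler would consult this before calling fail().
sub _should_ignore {
    my ( $class, $msg ) = @_;
    return grep { $msg =~ $_ } @IGNORE_PATTERNS;
}

1;
```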

        Please open up a ticket and we can iterate design ideas there.

      • Posted March 10, 2013 at 5:30 pm | Permalink

        I've released 0.002 with this feature.

  4. Posted March 4, 2013 at 10:02 am | Permalink

    Why introduce a new module? Why not just take over Test::NoWarnings? Adam has never balked at anyone asking for commit bits to modules.

    • Posted March 4, 2013 at 12:52 pm | Permalink

      I already have a commit bit to Adam's repo.

      I think Test::NoWarnings has the wrong paradigm and I judged it faster to whip up a replacement with a better paradigm than figure out how to fix the old one.

      Looking back, I did a proof of concept in a temp directory around 5pm, created the git repo for the dist around 5:15pm and shipped to CPAN around 5:45pm.

      In other cases, like PPI::XS, I will fix things or take co-maint when I judge that the faster path to progress.

      • Tommy
        Posted March 4, 2013 at 1:14 pm | Permalink

        "Looking back, I did a proof of concept in a temp directory around 5pm, created the git repo for the dist around 5:15pm and shipped to CPAN around 5:45pm."

        Dang. Like a boss!

Not to touch off a firestorm here, but I think you made the right call, David. Without coming out and saying "it's broken", I'll just say that Test::NoWarnings is... what it is, and what it is is not what I need. I need something that tracks down warnings without adding to my test count, thereby leaving it possible to skip_all when I have a platform-specific or release-only test sequence.

A lesser complaint is that I don't need something saving up all my warnings until the END {} like a bitter ex-wife. That's not a feature! I prefer to know about issues as they happen, before they fester and explode in a fiery ball of output so prolix that it actually obscures the error you were trying to hunt down.

By the way, the captchas are waaaay too hard to read.

        • Posted March 4, 2013 at 1:20 pm | Permalink

Sorry about the captchas. It's just some WP plugin. :-)

      • Posted March 4, 2013 at 1:29 pm | Permalink

That makes sense now that I know. I don't disagree that it's broken. I just wonder if it could somehow be fixed via a rewrite.

  5. Steffen Winkler
    Posted March 9, 2013 at 12:33 am | Permalink

Tests with done_testing strike me as lazy. It is unsafe if the run of a test depends on conditions. We avoid done_testing and have no problems in our big project with Test::NoWarnings, because we plan. Sometimes we call plan later, to calculate the count of tests first. We write special OO test modules for the things we need more than once; after construction, you can ask such a module how many tests it would add.

    • Posted March 9, 2013 at 7:02 am | Permalink

      Steffen, sounds like you're happy with what you've got. That's fine. Test::FailWarnings is not for you.

© 2009-2014 David Golden All Rights Reserved