Nice review. I think the most important of these is setting ids in your fixtures, which helps keep parameterisations readable when there are a lot of them. Once you go down the path of parameterising fixtures you can end up with tests that have a lot of parameters, and it can be hard to tell what’s failed at a glance.
A few more…though perhaps people know all of these:
If you set log_level=DEBUG in your pytest config you will get formatted logging output from the test run, which can help debug failures. If you need to print stuff to debug a test failure, perhaps it’s something for a log (possibly at DEBUG level). I always set this config first on every project, because logs you use in testing are often handy in prod.
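For reference, a minimal sketch of that config in pytest.ini (pyproject.toml works too; the log_cli lines are optional and just mirror captured logs live in the terminal):

```ini
# pytest.ini
[pytest]
log_level = DEBUG
# optional: also stream the logs to the terminal as tests run
log_cli = true
log_cli_level = DEBUG
```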
--ff (failures first), -x (stop on first failure), --pdb (enter the debugger on assertion failure or exception) and -k xyz (run tests matching “xyz”) are key flags to use (--sw, stepwise, is handy when doing big refactorings). Having your test drop into the debugger when something goes wrong makes debugging really easy.
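Roughly how those flags combine on the command line (the test name pattern is made up):

```shell
# rerun with last run's failures first, stop at the first failure
pytest --ff -x
# run only tests matching "checkout", drop into the debugger on failure
pytest -k checkout --pdb
# work through a big refactoring one failure at a time
pytest --sw
```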
Plugins are extremely easy to write, and to begin with you can just put them into your conftest.py. I once developed a plugin for a television set-top box maker that sent test-run telemetry to a REST API for posterity (among other things).
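A minimal local-plugin sketch along those lines: hook functions defined in conftest.py are picked up automatically, no packaging needed. The payload shape here is made up and the actual REST call is omitted:

```python
# conftest.py -- hooks defined here act as a local plugin
collected = []

def pytest_runtest_logreport(report):
    # called for the setup/call/teardown phases; keep only the test body result
    if report.when == "call":
        collected.append({"test": report.nodeid, "outcome": report.outcome})

def pytest_terminal_summary(terminalreporter):
    # last chance to ship telemetry, e.g. POST `collected` to your API
    terminalreporter.write_line(f"telemetry: {len(collected)} results collected")
```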
The caplog fixture lets you assert that log messages were actually emitted - important for testing error handling in server software. I wouldn’t assert on every log, but there are certain circumstances where you want to ensure that your important message is being logged.
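A minimal sketch of caplog in action (the logger name and the place_order handler are made up):

```python
import logging

logger = logging.getLogger("orders")  # hypothetical logger name

def place_order(qty):
    # stand-in server-side handler
    if qty <= 0:
        logger.error("rejected order: qty=%s", qty)
        return False
    return True

# In a pytest test, the built-in caplog fixture records what was logged:
def test_rejection_is_logged(caplog):
    with caplog.at_level(logging.ERROR):
        assert place_order(0) is False
    assert "rejected order" in caplog.text
```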
And one other tip: tear down the database before your tests, not at the end. If the tests fail it’s nice to be able to inspect the state afterwards…
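A sketch of that ordering with sqlite3 (the path and schema are made up); the pytest fixture wrapping is shown in comments:

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.gettempdir(), "demo_app.db")  # hypothetical location

def fresh_db():
    # tear down BEFORE the run, never after: a failing test leaves
    # the file on disk for post-mortem inspection
    if os.path.exists(DB_PATH):
        os.remove(DB_PATH)
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE events (msg TEXT)")
    return conn

# wrapped as a pytest fixture it would look like:
# @pytest.fixture
# def db():
#     conn = fresh_db()
#     yield conn
#     conn.close()  # close the handle, but keep the file around
```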
Great tool, probably one of the bigger reasons I like working in Python.
Good ones, thanks for the list! As you already started, here are a few more gems that might be interesting :)
pytest-socket: a library that prevents internet usage during test execution.
pytest-mock: a library that brings you a mocker fixture, which enables very convenient mocking.
pytest-sugar: visually more appealing test results and instant failure reporting (you don’t have to wait for all your tests to finish to see the error traces).
Autouse fixtures: automatically applied to every test case. This is very convenient when there are certain actions you don’t want enabled in your test suite (e.g. mocking certain API calls or expensive document generations).
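A minimal autouse sketch, where send_email is a hypothetical stand-in for the real call you want disabled everywhere:

```python
import sys
import pytest

def send_email(to, body):
    # hypothetical external action that must never run in tests
    raise RuntimeError("would send real email")

@pytest.fixture(autouse=True)
def stub_email(monkeypatch):
    # requested by no one, yet applied to every test in this module
    monkeypatch.setattr(sys.modules[__name__], "send_email",
                        lambda to, body: "stubbed")

def test_signup():
    # by the time this runs, send_email is already the cheap stub
    assert send_email("a@example.com", "welcome") == "stubbed"
```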
Didn’t know about pytest-socket, will have to try it out.
For mocking I usually just use the unittest.mock stdlib module. In the past I would use pytest’s monkeypatch fixture.
Another great plugin is pytest-cov which outputs code coverage reports.
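Typical invocation (the package name is made up; term-missing lists the uncovered line numbers):

```shell
# report coverage for the `myapp` package, showing which lines were missed
pytest --cov=myapp --cov-report=term-missing
```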
I haven’t used monkeypatch, but I do like mocker. It mirrors the unittest.mock API except it’s a pytest fixture. I find it helps clean up tests significantly.
Another pattern that I use, similar to parameterization, is to have the fixture return a function, e.g.:
    bar_42 = bar(42)
    bar_57 = bar(57)
    assert bar_42.thingy() == (bar_57.thingy() - 15)
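A fuller sketch of that factory-fixture pattern; Bar and thingy are stand-ins with assumed behaviour:

```python
import pytest

class Bar:
    def __init__(self, n):
        self.n = n

    def thingy(self):
        # assumed behaviour, just so the arithmetic below holds
        return self.n

@pytest.fixture
def bar():
    # returning a factory lets one test build several
    # differently-configured instances
    def make(n):
        return Bar(n)
    return make

def test_thingy(bar):
    bar_42 = bar(42)
    bar_57 = bar(57)
    assert bar_42.thingy() == (bar_57.thingy() - 15)
```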
    bar_mock = Bar()
    mocker.patch.object(bar_mock, 'thingy', return_value=value)
I agree that mocker yields a cleaner interface. It just happened too often to me that tests failed in mysterious ways until I realised that the signature of my test needed to change because I had added or removed a mock.patch decorator…
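That pitfall in miniature: each stacked @mock.patch prepends one argument, bottom decorator first, so every patch you add or remove forces a signature change (check_order is a made-up name):

```python
import json
from unittest import mock

# the BOTTOM decorator becomes the FIRST parameter; add a third patch
# and the signature must grow again
@mock.patch("json.dumps")
@mock.patch("json.loads")
def check_order(mock_loads, mock_dumps):
    # while patched, the module attributes ARE the injected mocks
    return json.loads is mock_loads and json.dumps is mock_dumps
```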
I like the idea of returning a function in the fixture. I’ll see if I can put that to use somehow :)
Thanks for that, I have only dabbled in pytest so far, but was pleasantly surprised by the basic features already, so that was an interesting read.