The primitive test library was one of the things I really disliked about Go from the start. (I spent about two years working in Go, 2013-2015.)
The arguments in this post against using an assertion library seem to come down to “it takes effort to choose one” and “but people might choose incompatible ones” (both of which are fallout from Go not including one in the first place), “but you can write tests without one” (which is a form of the Appeal To Turing-Completeness fallacy), and “after a few months you won’t miss assertions” (which is Stockholm syndrome, and was false in my case.)
This was then rewritten in plain go test in a very imperative style … 41 lines, but only 646 chars typed, just 120 chars more than the assert lib.
I’m not a Go programmer, but yikes. This seems to overfocus on the “not that much worse” character or line cost of doing it “the plain go test” way, while ignoring that an approach built on churning out mass amounts of boilerplate if err != nil then print “got wrong Foobar” is, at best, a gigantic waste of everyone’s time, and more likely a rich source of inconsistent error formats (making it unnecessarily difficult to wrap automated tooling around your tests) and copy-paste errors, as someone who is tired of typing the same thing over and over pastes print “got wrong Foobar” under the BazBang variable’s test instead.
The assert DSL library at least appears to assure that the error messages are consistently formatted and match what was tested – which is the absolute bare minimum I’d want out of even the most primitive of testing libraries.
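For what it’s worth, even a single tiny helper, written once, gets you that consistency. A stdlib-only sketch (checkEqual is my name for it, not anything from testify or the post):

```go
package main

import "fmt"

// checkEqual returns "" on a match; otherwise it returns a failure
// message in one fixed shape, so every mismatch in the suite prints
// the same way and nobody pastes the wrong field name into a
// hand-written message.
func checkEqual(field string, want, got any) string {
	if want == got {
		return ""
	}
	return fmt.Sprintf("wrong %s: got %v, want %v", field, got, want)
}

func main() {
	fmt.Println(checkEqual("CountryCode", "DE", "FR"))
}
```

Because every message flows through one Sprintf, the format can never drift between tests, which is exactly the property being asked for here.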
Which isn’t to say that testify has a good API – I’ve never used it. But anytime you’re paying people to churn out boilerplate instead of letting the computer do it for you, you’ve significantly misunderstood what computers are actually good for.
Which brings me back to:
But then actually once you have assert.Equals dotted throughout the tests the cost of removing it becomes unaffordable. Once you commit to one of the assertion libs it becomes hard to reverse that change. In that respect it is an expensive decision.
Given that the argument order of assert.Equals et al is documented and the “plain Go” way appears to be extremely rote boilerplate, if you decide you don’t like the library transforming the former into the latter via a small script seems…extremely uncostly and straightforward? There’s no reason to be doing this by hand.
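As a sketch of how mechanical that transformation is, here is a hedged stdlib-only example; a real migration script would also need to handle optional message arguments, multi-line calls, and the other matchers:

```go
package main

import (
	"fmt"
	"regexp"
)

// assertEqualCall matches only the simplest one-line form of the call.
var assertEqualCall = regexp.MustCompile(`assert\.Equal\(t, ([^,]+), ([^)]+)\)`)

// rewriteAssert turns one assert.Equal line into the equivalent
// "plain go test" if-block; non-matching lines pass through unchanged.
func rewriteAssert(line string) string {
	return assertEqualCall.ReplaceAllString(line,
		`if $2 != $1 { t.Errorf("got %v, want %v", $2, $1) }`)
}

func main() {
	fmt.Println(rewriteAssert(`assert.Equal(t, want, got)`))
}
```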
Working in a large Go codebase with thousands of tests written using testify, I have absolutely no desire to switch everything to t.Error() to avoid whatever pitfalls of testify the author seems horrified by. The tests run extremely fast with single-line asserts, without my having to implement every single eq type. Consider:
resp, err := SomeGRPCCall()
require.NoError(t, err, "unexpected error making the call") // fail the test immediately with a useful message if there is an error
assert.Equal(t, expectedThis, resp.GetThis(), "expected this to match")
assert.Equal(t, expectedThat, resp.GetThat(), "expected that to match")
With perhaps half a dozen or more fields in some gRPC responses. This could easily double or more in size without any discernible advantage in performance or clarity, and with oodles of boilerplate.
Some of the matchers feel a little too rspec-y for me (assert.NotZero() for example) but generally it provides a very nice way to avoid writing a LOT of code. It’s also nice to be able to differentiate between assert and require, so that if you break a test in a way that doesn’t segfault, you can see all the assert failures in a single run.
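That assert/require distinction boils down to “record and continue” versus “record and stop”. A stdlib-only sketch of the semantics (not testify’s actual implementation; all names here are mine):

```go
package main

import "fmt"

// run exercises a sequence of checks. An assert-style check records a
// failure and keeps going; a require-style check aborts the run when it
// fails, because nothing after a failed precondition can pass meaningfully.
func run() []string {
	var failures []string
	assert := func(ok bool, msg string) {
		if !ok {
			failures = append(failures, msg)
		}
	}
	require := func(ok bool, msg string) bool {
		assert(ok, msg)
		return ok
	}

	if !require(1+1 == 2, "setup failed") {
		return failures // a require failure stops here
	}
	assert(1 == 2, "first mismatch")  // recorded, run continues
	assert(3 == 3, "never recorded")  // passes
	assert(4 == 5, "second mismatch") // also visible in the same run
	return failures
}

func main() {
	fmt.Println(run())
}
```

Both mismatches surface in one run, which is the “see all the assert failures” behaviour described above.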
With respect to type safety, I agree that the type conversion approach is suboptimal, but this is plainly a problem with Go’s lack of generics (imo). I would love the type system to allow me to say assert.Equals(t, this, that) and fail to compile if the types did not match; I do not have any interest whatsoever in something like myCustomEqStrings(this, that) and myCustomEqInt64(this, that) and so on ad infinitum in a production environment in which I am actually expected to deliver working software from time to time.
If you are happy with testify but looking for a decent way to mock/fake calls made to your dependencies, we have found counterfeiter to be worth our while. You can declare an interface for your dependencies (good practice anyway) and then generate fakes against it to simulate responses of various types. Very useful if you work with microservices.
The reason I started using testify is because the error messages were better than the built in messages, and I didn’t like repeating the if statement condition in every error message. I’m not sure if this is still the case though.
I like using libs like testify for the same reason, when a test fails, the output is helpful. Multiline strings are diffed. JSON inconsistencies are highlighted. Nested values with subtle differences are compared. It’s those features that make a huge difference.
I think testify has evolved over time in ways it shouldn’t, like inconsistent arguments and functions that bloat the API, but it’s still great imo.
Out of my own desire to explore ideas I’ve been building my own assertion library, inspired by testify, minimalistic, but useful for the apps I build, https://4d63.com/test. I don’t expect to build something better, but to understand the tradeoffs, the decision-making process, and how this stuff works.
I never interpreted the standard library testing package as intended to be the entirety of your testing apparatus, only the entry point to it, so that there was a universal standard for how to run the tests in a Go program. I do all my testing in a manner that lets people use the standard go test invocations, like go test -run to select specific tests or go test -count to run a given test many times, but internally there’s, you know, more tooling and library support to make the ergonomics of writing tests better. It seems super odd and weirdly dogmatic to say “you should not use any assertions library at all”.
(moved from a response to a root comment because it’s not really a response to the comment so much as it is a response to the article)
My experience is that pure Go tests are obnoxious to write and debug. You have to tediously format debugging information yourself every time instead of having assert functions do that for you. That in turn makes failing tests harder to debug when the person before you didn’t bother doing it, or did so inconsistently.
Worse, I’ve noticed that Go tests tend to assert the final results, but little in between. My tests usually assert every step of the way so I don’t have to spend time figuring out what went wrong. Perhaps if Go tests had a convenient assert, people would assert more.
Asserts aren’t just for tests either. I use debug asserts all over my code. Asserts catch bad assumptions early, document those assumptions, and maintain their own freshness (they’ll fail when they’re wrong, unlike comments).
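A minimal sketch of such debug asserts, with a constant standing in for a build tag (all names here are illustrative, not from any library):

```go
package main

import "fmt"

// debugAsserts could instead be set via a build tag so release builds
// skip the checks entirely.
const debugAsserts = true

// assert documents an assumption in-line and fails loudly the moment it
// stops being true, unlike a comment, which just rots.
func assert(cond bool, msg string) {
	if debugAsserts && !cond {
		panic("assertion failed: " + msg)
	}
}

func nextID(last int) int {
	assert(last >= 0, "IDs are never negative")
	return last + 1
}

func main() {
	fmt.Println(nextID(41))
}
```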
Interesting, I do test intermediate state, but mostly errors.
Formatting never was an issue for me. Sometimes comparing complex structs may be tedious. If reflect.DeepEqual does not help, then I use github.com/google/go-cmp and that is it.
Other than that I do not have any issues with the testing package.
Hidden in the depths of this article is this statement, which I think is important, but the rest of the content does not really reflect this more moderate view.
It really isn’t such a big deal in any case, either way is OK, each team should make the call, but once it has been made it should be kept consistent.
And the argument about introducing dependencies with very large API surfaces, like GoConvey, is also something to consider. But Testify hits the sweet spot in my opinion - the utility and consistency it provides is high while the API surface is fairly small. Sure, it’s not perfect, but then neither are the counter examples in this article.
In the worst case, even after working with plain go test, some developers really struggle to understand or accept why they should not use their choice of helpers, occasionally attempting to sway opinion by challenging the intelligence and integrity of the team with accusations of cult-like behaviour (cargo or otherwise). “Kool-aid” gets mentioned more than once.
These people are not wrong. “Cult” has a derived sense of “excessive devotion”, and this is exactly that. I’ve written a lot of Go and I love many things about the Go community, but I think a touch more pragmatism and a touch less slavish adherence to Go aphorisms would go a long way toward making the language more attractive to more people.
More concretely, the initial Testify example is completely non-idiomatic and thus the argument about line length and increased cognitive overhead is based on a false premise. In reality idiomatic Testify would look something like the below, which is extremely concise, easy to understand, and the same pattern can be repeated for almost any tests without having to write a bespoke 30-line comparison function for each comparison. Additionally it gives you very useful error output in the case where it fails, including a diff of the expected and actual structures, unlike the proposed alternative.
There is another semantic indirection in the assertion lib, something of a mini DSL to learn:
This makes zero sense. The argument that learning testify/assert once is somehow worse than comprehending potentially hundreds of bespoke assertion functions is nuts to me. Additionally, each bespoke assertion function has the potential to introduce its own subtle bugs, vs. Testify’s millions of usages shaking bugs out.
Also the statement that the following both pass is just completely false.
The final solution also doesn’t provide as much utility as Testify because the following snippet doesn’t tell you what the expected value is! Ironically, this is almost a perfect argument for why Testify is a strict improvement over ad-hoc functions like these.
if got.CountryCode != cc {
t.Error("got wrong CountryCode", got.CountryCode)
}
This blog post is absolutely great, and I use it as a starting point for a discussion about why not to use a custom testing interface (most of the time testify).
I feel very strongly about testify. Negatively. Its use is a red flag for me and a sign that the author might not be thinking, but rather forcing solutions known from somewhere else.
I feel very strongly about testify. Negatively. Its use is a red flag for me and a sign that the author might not be thinking, but rather forcing solutions known from somewhere else.
This is a pretty strong statement regarding the usage of a test helper library.
But if you think someone writing their own WithinDuration test helper method, for each library they write, is a good use of their time, and is somehow a signal for overall project quality (or of somehow not thinking?!), then I guess more power to you.
Sure. If time is worth nothing, then writing a set of helpers for every project is fine.
Oh, but then maybe you could share it across projects to save a bit of time/effort for any new project you start.
Maybe even open source it? Other people might even be interested in using it!
Oh wait… Now we are back to square one with it being bad?
That said, if you only need one (or a few, or several even) function, then sure. I agree that copying it around is better than adding a dependency. But if you ever reach a point where you have to update more than one project to add or fix a helper, then you are probably better off making it a dependency.
But use of a test helper library as somehow being a red light for overall project quality, sure seems dubious to me.
Sure. If time is worth nothing, then writing a set of helpers for every project is fine.
The time it takes me to write those helpers is, without exaggeration, less than the time I spend in a day waiting for VS Code to complete whatever action. It doesn’t enter into the cost accounting. The cost of a dependency, on the other hand, is real, significant, and perpetual.
I once heard a good rule of thumb: never import anything you could write in an afternoon. Assert is well below that threshold.
But if you ever reach a point where you have to update more than one project to add or fix a helper, then you are probably better off making it a dependency.
The only reason to add a helper to a project is if you need it; the only reason to update a helper in a project is if it’s causing problems in that project. There’s no situation I can think of where you have a bunch of similar/identical helpers in a bunch of projects you own/maintain, and you need to update them all.
I’ve been using Go since before 1.0 was released. I have a lot of experience using the reflect package. I’m pretty sure I couldn’t write a good set of assert helpers in an afternoon.
The funny thing here is that nobody seems to acknowledge that the assert helpers aren’t just about deleting some if statements. It’s also about the messages you get when a test fails. A good assert helper will print a nice diff for you between expected and actual values.
testify is pretty dang close to what I would write. And while some dependencies have a perpetual cost, I’ve not experienced that with testify specifically.
I usually like the “Go Way” of doing things, but this particular position is pretty Out There IMO.
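As a sketch of what that diffing amounts to, here is a stdlib-reflection version for flat structs with exported fields; real libraries such as go-cmp additionally handle nesting, unexported fields, and custom Equal methods (the Addr type is purely illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// fieldDiffs reports, field by field, where two values of the same flat
// struct type disagree, instead of one opaque "not equal".
func fieldDiffs(want, got any) []string {
	wv, gv := reflect.ValueOf(want), reflect.ValueOf(got)
	var diffs []string
	for i := 0; i < wv.NumField(); i++ {
		w, g := wv.Field(i).Interface(), gv.Field(i).Interface()
		if !reflect.DeepEqual(w, g) {
			diffs = append(diffs,
				fmt.Sprintf("%s: want %v, got %v", wv.Type().Field(i).Name, w, g))
		}
	}
	return diffs
}

type Addr struct {
	City        string
	CountryCode string
}

func main() {
	fmt.Println(fieldDiffs(Addr{"Berlin", "DE"}, Addr{"Berlin", "FR"}))
}
```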
The funny thing here is that nobody seems to acknowledge that the assert helpers aren’t just about deleting some if statements. It’s also about the messages you get when a test fails. A good assert helper will print a nice diff for you between expected and actual values.
I don’t see much value in rich assertion failure messages, most of the time. Literally this and nothing more is totally sufficient for 80% of projects.
// Assertf fails the test with the given formatted message when b is false.
func Assertf(t *testing.T, b bool, format string, args ...interface{}) {
	t.Helper()
	if !b {
		t.Errorf(format, args...)
	}
}
You’re going to have a hell of a time debugging that on CI when all you have is “foobar equality failed” with no indication of what the unexpected value was to help you puzzle out why it works on your machine but not the CI server.
I mean, more power to you, but I’m not out to make my job any harder than it has to be. “expected: 'test string', received: 'TODO set this value before pushing test config'” is too easy a win for me to ignore, and god help you when the strings are piles of JSON instead. Then you’re really going to want CI to give you that diff.
You’re going to have a hell of a time debugging that on CI when all you have is “foobar equality failed” with no indication of what the unexpected value was to help you puzzle out why it works on your machine but not the CI server.
I hear this often enough, but it’s just never been my experience; I guess I’m asserting at a relatively granular level compared to most people.
But it’s moot, I think, because if you need that specificity, Assertf lets you provide it just fine by way of the format string.
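To make that concrete outside a test binary, here is the same Assertf idea with *testing.T swapped for a hypothetical recorder type; the point is that the format string is what carries the specificity:

```go
package main

import "fmt"

// recorder stands in for *testing.T's Errorf so the example can run
// outside a test binary.
type recorder struct{ msgs []string }

func (r *recorder) Errorf(format string, args ...any) {
	r.msgs = append(r.msgs, fmt.Sprintf(format, args...))
}

// assertf records a formatted failure message when ok is false.
func assertf(r *recorder, ok bool, format string, args ...any) {
	if !ok {
		r.Errorf(format, args...)
	}
}

func main() {
	r := &recorder{}
	got, want := "FR", "DE"
	// The caller decides how specific the failure message is:
	assertf(r, got == want, "CountryCode: got %q, want %q", got, want)
	fmt.Println(r.msgs)
}
```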
I think the distinction is we are all likely writing different types of tests, that trade off different things.
In tests that I write asserting on simple values, sure simple ifs get the job done for me.
In tests of JSON outputs, or large structures, I find it more helpful to test equality of the entire thing at once and get a diff. It’s faster to review, and I get greater context, and the test will break if things change in the value I’m not testing.
I find a lot of value in tests that operate at the top level of an application. Like tests that test the stdin and stderr/stdout of a CLI, or tests that test the raw request and response to an API. They catch more bugs and force me to think about the product from the perspective of the system interacting with it. I don’t think this is the only thing to test for though or only way to test.
I know I find value in testify, it isn’t perfect like any code, but I dont think there’s a perfect practice about whether to use testify or not. It depends what you’re optimizing for and the type of assertions you’re making and inspecting.
But it’s moot, I think, because if you need that specificity, Assertf lets you provide it just fine by way of the format string.
It’s not moot, because usually by the time you realize you need it, you’re already looking at the failing test in CI. So now you need to roundtrip a patch to make your test more verbose.
I don’t see much value in rich assertion failure messages, most of the time.
Writing tests is part of my daily flow of programming, and so are failing tests. Not having to spend a bunch of time printf-ing values is a literal time saver.
I’ve spent more years using plain go test than testify. We switched to testify at work a few years back and it paid for itself after a couple days.
And I love how the goalposts have shifted here subtly. At first it was, “don’t reuse code that you could just write yourself in an afternoon.” But now it’s, “oh okay, so you can’t write it in an afternoon, but only because you value things that I don’t.” Like, have all the opinions you want, but “failure on test.go:123 is often totally sufficient” is just empirically wrong for me.
Before testify, writing tests was a huge pain in the ass. And if it wasn’t a pain in the ass, it was a pain in the ass to read the error messages because the test didn’t print enough detailed information.
Case in point, we’d have things like if !reflect.DeepEqual(x, y) { ... }, and when that failed, we’d be like, “oh what changed.” If x and y are big nested types, then printing out those values using the standard formatting specifiers is not that helpful. And I view the fact that needing reflect.DeepEqual (or go-cmp) in tests as a shortcoming in the language. There’s a convention for defining Equal methods which go-cmp reuses thankfully, but no other part of the language really recognizes the reality that, hey, maybe types want to define their own equality semantics independent of what Go does for you by default. And thus, Equal is not composable unless you go out of your way to recursively define it. Which, by the way, is an immediate footgun because it’s easy to forget to update that method when a new field is added. And it’s hard to write a good test for that.
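That footgun is easy to demonstrate with a hypothetical type (not from the post): an Equal method written before a field was added silently ignores the new field.

```go
package main

import "fmt"

// User gained an Email field after Equal was written; Equal was never
// updated, so two users with different emails still compare "equal".
type User struct {
	Name  string
	Email string // added later
}

func (u User) Equal(v User) bool {
	return u.Name == v.Name // forgot: && u.Email == v.Email
}

func main() {
	a := User{Name: "ann", Email: "a@example.com"}
	b := User{Name: "ann", Email: "b@example.com"}
	// Stale Equal says true; the compiler's == says false.
	fmt.Println(a.Equal(b), a == b)
}
```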
And don’t get me started on other shitty things. Like comparing values with time.Time in them somewhere. Or doing other things like, say, asserting that two slices have equivalent elements but not necessarily the same order. Oops. Gotta monomorphize that second one manually for each collection type you call for it. Or I could just use ElementsMatch and not think about it again.
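For contrast, here is the “same elements, any order” check hand-monomorphized for one element type, which is exactly the chore being described; testify’s ElementsMatch does this for arbitrary slices.

```go
package main

import (
	"fmt"
	"sort"
)

// elementsMatchInts reports whether two slices hold the same multiset
// of ints, regardless of order. Before generics, each element type
// needed its own copy of this.
func elementsMatchInts(a, b []int) bool {
	if len(a) != len(b) {
		return false
	}
	as, bs := append([]int(nil), a...), append([]int(nil), b...)
	sort.Ints(as)
	sort.Ints(bs)
	for i := range as {
		if as[i] != bs[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(elementsMatchInts([]int{3, 1, 2}, []int{2, 3, 1}))
}
```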
These are all problems that have come up for us in practice that have cost us time. Your “unpopular opinion” is crap in my experience.
Repetition is one of the complaints that testify users keep bringing up. Irony is also often present, I suspect to project a bit more confidence.
I think whether someone prefers to import as much external code as possible, or to first think about a problem and consider solving it without external code, says a lot about what kind of developer they are. I do not find it productive to argue about which approach is superior, because it often feels like beating a dead horse. I hope the right answer comes with experience.
I have no idea what WithinDuration does, so I had to check this. Isn’t this function solving a very specific problem? Using this logic, I could claim that testify is garbage because it does not provide a function to check if a date is B.C. and I must write the assertion manually.
It is easy to argue about abstract problems. Even easier if badly explained and with no context. Please notice that the blog post is very specific with examples and numbers.
One thing I don’t like about the testify lib is the lack of consistency on parameter order (actual and expected.)
bothers me a lot
Isn’t that one correct? Collection is the “actual” and length is the “expected”.
I don’t know if “correct” is really the appropriate word to use here, but no, it is inconsistent with most other methods. For example: https://pkg.go.dev/github.com/stretchr/testify@v1.7.0/require#Equal
Oh haha my bad. I misread the parent comment as claiming “actual, expected” is the prominent order but it’s indeed the reverse.
Use https://pkg.go.dev/gotest.tools/v3/assert instead of testify. Small API surface area and uses go-cmp under the hood for diffing.
I just wanted to thank you for pointing out this lib.
For some reason I hadn’t run across it yet.
I easily swapped out testify with this using the auto-migration tool on a couple of my projects. Worked great!
My experience is that pure Go tests cover 99.9% of cases. Testify feels like an inability to adapt to a new language.
When a test is failing, do you ever printf debug? At some point I noticed that the majority of the time I add prints (or check values in a debugger), I’m looking for something specific that indicates the value is right or wrong. Asserts work perfectly for that, in tests or normal code.
Most of the time a failed test gives me enough information about the underlying issue, maybe because the scope of unit tests is somewhat limited.
If a test fails and I do not know the answer, I instrument my production code with a bunch of fmt.Printf statements, and that helps to find the issue.
I prefer limited output when it comes to debugging rather than overly verbose output.
I see your point. I would imagine we have different coding styles.
I use https://github.com/google/go-cmp and the standard library testing package. I like the combo.
“A little copying is better than a little dependency”
The cost of writing a set of Assert helpers for each project you own/maintain is zero.
Sure. If time is worth nothing, then writing a set of helpers for every project is fine.
Oh, but then maybe you could share it across projects to save a bit of time/effort for any new project you start.
Maybe even open source it? Other people might even be interested in using it!
Oh wait… now we are back to square one, with it being bad?
That said, if you only need one (or a few, or several even) function, then sure. I agree that copying it around is better than adding a dependency. But if you ever reach a point where you have to update more than one project to add or fix a helper, then you are probably better off making it a dependency.
But treating use of a test helper library as a red light for overall project quality sure seems dubious to me.
The time it takes me to write those helpers is, without exaggeration, less than the time I spend in a day waiting for VS Code to do whatever action. It doesn’t enter into the cost accounting. The cost of a dependency, on the other hand, is real, significant, and perpetual.
I once heard a good rule of thumb: never import anything you could write in an afternoon. Assert is well below that threshold.
The only reason to add a helper to a project is if you need it; the only reason to update a helper in a project is if it’s causing problems in that project. There’s no situation I can think of where you have a bunch of similar/identical helpers in a bunch of projects you own/maintain, and you need to update them all.
I’ve been using Go since before 1.0 was released. I have a lot of experience using the reflect package. I’m pretty sure I couldn’t write a good set of assert helpers in an afternoon.
The funny thing here is that nobody seems to acknowledge that the assert helpers aren’t just about deleting some if statements. It’s also about the messages you get when a test fails. A good assert helper will print a nice diff for you between expected and actual values.
testify is pretty dang close to what I would write. And while some dependencies have a perpetual cost, I’ve not experienced that with testify specifically.
I usually like the “Go Way” of doing things, but this particular position is pretty Out There IMO.
I don’t see much value in rich assertion failure messages, most of the time. Literally this and nothing more is totally sufficient for 80% of projects.
You’re going to have a hell of a time debugging that on CI when all you have is “foobar equality failed” with no indication of what the unexpected value was to help you puzzle out why it works on your machine but not the CI server.
I mean, more power to you, but I’m not out to make my job any harder than it has to be. “expected: "test string", received: "TODO set this value before pushing test config"” is too easy a win for me to ignore, and god help you when the strings are piles of JSON instead. Then you’re really going to want CI to give you that diff.
I hear this often enough, but it’s just never been my experience; I guess I’m asserting at a relatively granular level compared to most people.
But it’s moot, I think, because if you need that specificity, Assertf lets you provide it just fine by way of the format string.
I think the distinction is we are all likely writing different types of tests, that trade off different things.
In tests that I write asserting on simple values, sure simple ifs get the job done for me.
In tests of JSON outputs, or large structures, I find it more helpful to test equality of the entire thing at once and get a diff. It’s faster to review, and I get greater context, and the test will break if things change in the value I’m not testing.
I find a lot of value in tests that operate at the top level of an application. Like tests that test the stdin and stderr/stdout of a CLI, or tests that test the raw request and response to an API. They catch more bugs and force me to think about the product from the perspective of the system interacting with it. I don’t think this is the only thing to test for though or only way to test.
I know I find value in testify, it isn’t perfect like any code, but I dont think there’s a perfect practice about whether to use testify or not. It depends what you’re optimizing for and the type of assertions you’re making and inspecting.
It’s not moot, because usually by the time you realize you need it, you’re already looking at the failing test in CI. So now you need to roundtrip a patch to make your test more verbose.
Writing tests is part of my daily flow of programming, and so are failing tests. Not having to spend a bunch of time printf-ing values is a literal time saver.
I’ve spent more years using plain go test than testify. We switched to testify at work a few years back and it paid for itself after a couple of days.

And I love how the goalposts have shifted here subtly. At first it was, “don’t reuse code that you could just write yourself in an afternoon.” But now it’s, “oh okay, so you can’t write it in an afternoon, but only because you value things that I don’t.” Like, have all the opinions you want, but “failure on test.go:123 is often totally sufficient” is just empirically wrong for me.
Before testify, writing tests was a huge pain in the ass. And if it wasn’t a pain in the ass, it was a pain in the ass to read the error messages because the test didn’t print enough detailed information.
Case in point, we’d have things like if !reflect.DeepEqual(x, y) { ... }, and when that failed, we’d be like, “oh, what changed?” If x and y are big nested types, then printing out those values using the standard formatting specifiers is not that helpful. And I view the need for reflect.DeepEqual (or go-cmp) in tests as a shortcoming in the language. There’s a convention for defining Equal methods, which go-cmp reuses, thankfully, but no other part of the language really recognizes the reality that, hey, maybe types want to define their own equality semantics independent of what Go does for you by default. And thus, Equal is not composable unless you go out of your way to recursively define it. Which, by the way, is an immediate footgun, because it’s easy to forget to update that method when a new field is added. And it’s hard to write a good test for that.

And don’t get me started on other shitty things. Like comparing values with time.Time in them somewhere. Or doing other things like, say, asserting that two slices have equivalent elements but not necessarily the same order. Oops. Gotta monomorphize that second one manually for each collection type you call for it. Or I could just use ElementsMatch and not think about it again.

These are all problems that have come up for us in practice and have cost us time. Your “unpopular opinion” is crap in my experience.
That’s totally fine! This isn’t a competition, we’re just sharing experiences. I think?
This honestly made me feel bad; I’m sorry to have put you off.
I’m sorry too. Your comments in this thread came off as pretty dismissive to me and I probably got too defensive.
Repetition is one of the claims that testify users keep bringing up. Irony is also often present, I believe to project a bit more confidence.
I think whether you prefer to import as much external code as possible, or to first think about each problem and consider solving it without external code, says a lot about what kind of developer you are. I do not find it productive to argue about which approach is superior, because it often feels like beating a dead horse. I hope the right answer comes with experience.
I have no idea what WithinDuration does, so I had to check this. Isn’t this function solving a very specific problem? Using this logic, I could claim that testify is garbage because it does not provide a function to check if a date is B.C. and I must write the assertion manually.

It is easy to argue about abstract problems. Even easier if badly explained and with no context. Please notice that the blog post is very specific, with examples and numbers.