We have some tests using this technique at work. Guess what happened? No one ever set the variable on the CI, or removed it and forgot to re-add it, and for months we simply did not run any integration tests. In fact, we’re still not doing it. Because the tests are skipped, not failed, CI was green and no one noticed, until Mr. Nosy McNosyFace (a.k.a. me) went poking around in the actual output of the tests, because I had looked at the code and it didn’t make sense how the tests could possibly be passing.
Not saying it’s a bad technique; I think it’s good. But unfortunately, no amount of technical solutions will save you from chaotic processes.
Perhaps one way to mitigate this is to check for specific values when activating or deactivating tests, like "DON'T RUN" / "RUN", and if the variable is not set, raise an error. Developers can just set the environment variable to "DON'T RUN" in their profile or a .env file, but if it’s straight up missing from the CI environment, you’ll still catch it.
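A minimal sketch of that check in Python (the variable name and sentinel values here are just illustrative, not from any real project):

```python
import os

def integration_tests_enabled() -> bool:
    # Illustrative variable name; pick whatever your project uses.
    value = os.environ.get("RUN_INTEGRATION_TESTS")
    if value == "RUN":
        return True
    if value == "DON'T RUN":
        return False
    # Missing or unrecognized: fail loudly instead of silently skipping.
    raise RuntimeError(
        "RUN_INTEGRATION_TESTS must be 'RUN' or \"DON'T RUN\", got: "
        + repr(value)
    )
```

The point is that the "off" state is an explicit opt-out, so an empty CI environment becomes a hard error rather than a quiet skip.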
Alternatively, you could invert the logic and require setting an environment variable to deactivate certain tests, but that might be an unnecessary extra annoyance for developers.
When I did this, I tried to protect against exactly this failure mode by having one more test which asserts that, if the CI variable is set (which most CI systems do automatically), the slow tests actually did run (when running, each slow test would touch a target/.run-slow-tests file).
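In Python terms, the idea looks roughly like this (the marker path is the one from the comment; the test names and structure are made up for illustration):

```python
import os
from pathlib import Path

MARKER = Path("target/.run-slow-tests")

def test_slow_thing():
    # Every slow test drops the marker file as a side effect.
    MARKER.parent.mkdir(parents=True, exist_ok=True)
    MARKER.touch()
    # ... the actual slow assertions would go here ...

def test_slow_tests_were_not_silently_skipped():
    # On CI (most CI systems set the CI variable automatically),
    # fail the build if no slow test left the marker behind.
    if os.environ.get("CI"):
        assert MARKER.exists(), "slow tests did not run on CI"
```

The sentinel test always runs, so a misconfigured CI environment turns green-while-skipping into a red build.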
I think the main problem here is that integration tests are bad UX and thus create the need to skip them.
Instead of spending effort on how to skip the tests better or more easily, why not just make the integration tests less painful to run?
This is the reason I enjoy working in the Bazel space so much: the constraints the tool creates force people to face these human problems directly. It slows people down initially, but once they get over the initial bump, things get a lot better over time.