Just released first game on Steam two days ago (https://store.steampowered.com/app/1473870/Hell_Loop/), so mostly monitoring what’s happening and fixing issues as they arise, along with post-release marketing :)
Busy Christmas this year!
Congrats! It looks intriguing. I always loved Lemmings. This looks like quite a “twist” on that concept.
I used local port forwarding to access the screen of remote Macs:
ssh -L 5901:localhost:5900 $remote_mac
Then open “Screen Sharing.app” and connect to 127.0.0.1:5901.
Nice! It can be really handy with tools that are just painful to configure for direct connections. There were so many times when I wanted to connect to a postgres on another machine at home, but was too lazy to edit pg_hba.conf, so I just made a tunnel instead hehe. I guess it’s better for security reasons anyway.
In case someone finds this via search, here’s a follow-up thread for the next post: https://lobste.rs/s/ptucvb/let_s_write_2d_platformer_from_scratch
I would use a fixed time step for simple games. Making everything depend on dt may be more complicated for beginners, IMHO.
My issue with fixed time step is that you either multiply everything by a constant anyway (at which point dt isn’t any more difficult to use), or you specify velocities in unnatural units, such as pixels per frame (which is super unintuitive to think about). Or am I missing something?
With a fixed time step, dt will always be 1000/60. With a variable (and unconstrained) dt, every part of the game logic has to handle any value of it. For example, collision detection should be able to handle a large dt caused by long frame skips: you can no longer just do pos += dt, then check if the entity collides with something — entities may pass through walls, etc.
One option is to limit dt to some maximum value instead of making it fixed.
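To sketch the clamping idea (a hypothetical example, not from the article; the MAX_DT constant and entity shape are my own):

```typescript
// Sketch of a variable-timestep update with a clamped dt.
const MAX_DT = 1000 / 30; // never integrate more than ~33ms at once

interface Entity { pos: number; vel: number; }

function update(entity: Entity, rawDt: number): void {
  // Clamp dt so a long frame skip can't tunnel entities through walls.
  const dt = Math.min(rawDt, MAX_DT);
  entity.pos += entity.vel * dt;
}
```

The trade-off is that the game visibly slows down during long frame skips instead of entities teleporting, which is usually the lesser evil.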
Lots of large AAA games are OK with a variable timestep, however. AFAIK, Quake uses it. The bug that makes a mission in GTA 4 impossible to complete on fast PCs seems to be related to this. Quake III, however, has only rounding issues due to the variable time step.
The next post will be using box colliders with raycasts.
In this case variable time step will not make things more complicated, I think.
(I’m not an expert in games, and I’ve never even written a finished platformer; I just tried to.)
Admirable effort, but punts badly on collision detection (in part 2 if one follows the link). One really needs at least some basic physics engine in even the simplest platformer. Hopefully that’ll be in a future post (box2d?).
Author of the article here. The future post won’t use a physics engine. Physics engines are bad for 2d platformers which aren’t necessarily physics based. I mean if the goal is to make a physics based game (think Angry Birds) then sure, but if the goal is snappy controls like Super Mario, then you’ll fight the engine more than it helps imho. Sliding platforms, elevators and similar things are quite the pain for 2d physics engines. Can’t speak for 3d though.
The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.
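For readers curious what a minimal vertical raycast against axis-aligned platforms might look like, here’s a rough sketch (my own names and simplifications, not the article’s actual code):

```typescript
// Hedged sketch: cast a ray straight down and find the nearest
// platform top it hits, which is the core query for landing/gravity.
interface Platform { x: number; y: number; w: number; h: number; }

// Cast downward from (ox, oy) up to maxDist; return the distance to
// the first platform top hit, or null if nothing is hit.
function raycastDown(ox: number, oy: number, maxDist: number,
                     platforms: Platform[]): number | null {
  let best: number | null = null;
  for (const p of platforms) {
    if (ox < p.x || ox > p.x + p.w) continue; // ray misses horizontally
    const dist = p.y - oy;                    // distance to platform top
    if (dist >= 0 && dist <= maxDist && (best === null || dist < best)) {
      best = dist;
    }
  }
  return best;
}
```

Cast with maxDist equal to this frame’s fall distance; a hit means you snap the entity onto the platform instead of integrating past it.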
Any tips are welcome though.
edit: Just to react to the comment :P
but punts badly on collision detection (in part 2 if one follows the link)
You’re right, I’m not really that happy with the state it’s at. Initially I thought I’d make it one big article, but keeping all the code in sync ended up being a nightmare, which is why I decided to split it up, cut it off at a point where something works, and do the next part in a more concise manner.
Regarding keeping the code organised, for a multi-part article, why not create a git or mercurial repository - on github, bitbucket or gitlab for example. The code for each article can be in a separate branch which you can link to directly. You’d still need to manage changes made to an earlier stage but that’s pretty straightforward.
That’s not a bad idea, I thought about having a Gist of the finished code in each article. But the issue I was hinting at is with code snippets within a single article, not spanning multiple ones.
My approach to these articles is to have incremental samples with JSFiddle along the way, but all of those are separate snippets, and if I decide to change one thing I have to rewrite a lot of the article. I mean, the solution to this is easy: don’t have 10 copies of the code in each post, but I feel like that’s just making it more difficult to follow along. I guess I should probably figure out the whole code first before I start writing, to minimize the changes.
The next post will be using box colliders with raycasts. Once you have a 2d raycast (might even be enough to just have horizontal/vertical raycasts) you can do even stuff like slopes fairly easily, but the 3rd part most likely won’t get into that. I’ll probably cut it off when gravity/jumping works with static platforms.
This sounds great, I’d love to read that. It would be a valuable addition to the material already out there.
Am I the only one around here who doesn’t mind whiteboard interviews? Ultimately you’re just discussing a technical topic with someone and drawing a few boxes and arrows is really useful.
The last time I did a whiteboard interview I didn’t 100% nail the CS puzzler question and, given the offer I got, the interviewer really was mostly interested in my thought process and not my ability to hand indent python code on a whiteboard. I’ve had this experience more than once. Written communication is a skill and being able to communicate your thought process to someone else isn’t as artificial an environment as some make it out to be.
I’m a broken record about how bad whiteboard interviews are, but I think I generally do pretty well on them. After ~15 years of consulting and product management, I’m pretty comfortable in a neutral-to-hostile room. I think I can talk my way through most situations.
But that’s one of the things that scares me about whiteboard interviews. Not that they’re insurmountable hurdles for my own career, but that they’re too easy, and that I get undue positive evaluations just from the ability to remain confident-sounding during them, and, more importantly, by being able to redirect questions and reframe interviews.
I’ve worked with too many people who interviewed well but were almost total zeroes when it came to delivery to put any faith in ad-hoc interview processes.
I came here to say this. I’d much rather go to a whiteboard interview. I don’t do riddles or competitive programming, but I’d appreciate being tested a little more than “tells us about your last project”.
Seeing DHH’s tweet makes me cringe. Sure it works for him, and it works for a lot of people, but for me it is important to know some CS basics. I wouldn’t mind if he said quicksort, or something a little more complicated like that … but bragging that you can’t write the simplest algorithm there is, how is that a good thing?
However, I do agree that take-home exercises are good, and I enjoy those too. But they’re not as representative if you don’t ask the candidate to do something easy live.
I think it’s not necessarily a question about substance but more about style. Not everyone’s coding style is amenable to standing in front of people and hand-coding an algorithm while people are sitting there judging every character you write. You’re doing this while you’re expected to talk through what you’re writing on the board. That doesn’t come naturally to some people – when you’re sitting at a computer implementing an algorithm, you’re not talking about it out loud.
Another aspect of this is that a candidate can do better by studying for an interview. That’s a sign of a broken candidate vetting process. A candidate with years of experience and a top performer at their previous job could possibly be rejected based on questions that they may not have seen in years.
CS basics are important but some of that knowledge fades over time. I couldn’t give you a good definition of polymorphism without looking it up but I know what it means. I think the issue comes down to treating every candidate as though they were fresh out of school and the further they are from that, the more likely they are to fail those “basic” CS questions in this type of interview environment.
EDIT: better explanation than mine: https://medium.com/make-better-software/against-the-whiteboard-f1df0013954f#.hx2sgjnrl
DHH has written some valuable software, but the world is bigger than rails. So when I hear that in the early days of rails things leaked memory so bad that app servers had to be restarted every few minutes I think that’s bad. I realize for the type of work his websites were used for that wasn’t a critical defect. In most software I write that would be a critical defect.
So when I hear that in the early days of rails things leaked memory so bad that app servers had to be restarted every few minutes I think that’s bad.
There was a popular (now depublished) blog post by Zed Shaw called “Rails is a ghetto” where he was railing against a lot of things he perceived as wrong in the Rails community. One of them was that he wrote a critical piece of software, but no one would pay him for that. It was one of the first specialized HTTP adapters in Ruby, called Mongrel. Along with that, he wrote a critical gem that fixed MRIs heavy threading problem back then.
In that post, he mentioned in passing that DHH told him that their initial web stack was so bad that they had to restart it every ~100 requests because it leaked memory and Mongrel improved things a lot.
Now, having programmed Ruby since version 1.6, I think this is probably true. Ruby back then was a fringe language built by a few people who were good language designers, but not necessarily runtime implementors. Also, the runtime was built for scripting workloads, so threading, servers and the like were a not-so-well-tested case. With it becoming popular, things improved massively, and from 1.8.7 onwards I’d call MRI a runtime on par with Python and others. 1.9 finally made it somewhat modern.
But as /u/shanemhansen says: that wasn’t too critical for these kinds of applications. For example, it was standard for PHP websites to follow a CGI-like “one process per connection” approach to make sure there were no memory problems due to leaks. Restarting a set of webservers every nth request is also acceptable in such an environment, as long as there is always another one around.
I think the core of DHH’s argument is rather that he’d like people who understand those subtleties and possibilities, and you can’t really test that on a whiteboard.
The relevant bit from Rails is a Ghetto:
I believe, if I could point at one thing it’s the following statement on 2007-01-20 to me by David H. creator of Rails:
(15:11:12) DHH: before fastthread we had ~400 restarts/day
(15:11:22) DHH: now we have perhaps 10
(15:11:29) Zed S.: oh nice
(15:11:33) Zed S.: and that’s still fastcgi right?
Notice how it took me a few seconds to reply. This one single statement basically means that we all got duped. The main Rails application that DHH created required restarting ~400 times/day. That’s a production application that can’t stay up for more than 4 minutes on average.
I think people are too concerned about the whiteboard itself. It isn’t about using a whiteboard or not, it’s what you do with it.
Quizzing candidates about CS problems that have nothing to do with the day-to-day work and penalizing them if they don’t get it perfect on the first try is lame, whether or not you use a whiteboard for it.
On the other hand, getting candidates to write some kind of code somewhere that does something useful and discussing their thought process on techniques, tradeoffs, etc is a good idea, and I’d be worried about working somewhere that didn’t do that. A whiteboard can be a nice way to do that, but there’s lots of other ways, including paper, shared text documents, take-home projects, etc.
Now I know what I want for my birthday … ASCII printed in 4 columns on a large poster to hang over my monitor. This shit is art.
I do actually enjoy writing quite a bit of code before I hit compile. It depends on the problem at hand, and for some things it surely makes sense to have a faster feedback loop.
But having just spent half a year working on a game every day, there are cases when I write a bunch of complicated code for an hour or more without getting anywhere near running or testing it. Some code has to be written in larger chunks.
A lot of the time I don’t need to run my code to know that it is correct. Surely I could make a mistake, but I try to program somewhat defensively, throwing in asserts whenever there’s a precondition that isn’t completely trivial. People frown upon asserts these days, saying that TDD is the holy grail … but what if the code I need to write is just 300 lines of an algorithm that can’t simply be split into tiny 3-line methods that are red-green-refactored? What if the problem being solved is more concise and readable as 300 lines of switch/if statements and bare for loops with indices? Do I really need to run it every 5 seconds to stay on track?
Say that you’re writing a basic pathfinder, starting from scratch. First you probably want to have some representation of the graph, so you pick one and code it. There’s a clear idea of what the data structure looks like, what the invariants should be, and you just write it. Then when you’re writing the traversal algorithm, you just write down the algorithm with all the invariants as asserts. Surely it’d be nice to have some tests along the way to verify that the code is correct, but those don’t have to be run when writing those 50 lines.
One might suggest you’d want tests early if you don’t know if the code is really solving the problem you’re trying to solve. I’m not going to argue with that, but there’s great value in looking at a piece of code and thinking about it, instead of just running it and seeing what happens.
In my opinion, this is what it comes down to. Programmers these days like to run code instead of just looking at it and creating a mental model of what it does. Surely your mental model can be wrong, and you should definitely run and test the code, but you should first be fairly confident in what it will do. There’s nothing wrong with writing 100 lines in one go, reading them over and thinking to yourself “yes, this is definitely correct, I don’t need to run it for now”.
What if the code has a bug in it? People immediately reach for a debugger and try to step through the code to find the error, instead of just reading the code without executing it first.
This all breaks down especially in cases when you can’t actually run/test the code and you’re left with just your mind, trying to read the code and make sense of where the error might be. Debugging code by reading it line by line might be the only thing you have left in a lot of cases, and I’d much rather prepare myself to solve the difficult problems than optimize for the easy ones.
Is it more valuable to have a super fast feedback cycle when writing trivial lines of code, or to train yourself in keeping a strong mental model, so that you can use it when the time comes and you can’t use your fancy tools anymore?
The fact that you occasionally spend more time thinking/coding than running/debugging doesn’t negate the need for fast compiles and, ideally, hot code swapping of some kind.
I’ve written games with pathfinding and complex fiddly bits of logic and such before. In my experience, even if your code is 100% correct on the first try, games need lots of rendering hooks in to subsystems for visualizing what’s happening. Maybe the AI does something surprising and you want to understand why it does it. Or maybe performance isn’t great and you want to watch the navmesh while you tweak the assets to minimize the work the engine is doing. Being able to change a color or a formula or something, then Edit & Continue is an incredible time saver that leads to better outcomes.
I’m sympathetic to this argument, but have historically struggled to develop this skill deliberately, not for lack of trying. Any tips for working towards mastering this skill, outside of just doing it in your day-to-day? In particular, I find the right modality for building my mental model of the code typically falls somewhere between verbal and visual, but I’m not sure whether this is habit or optimal.
I was excited to see this post so I look forward to hearing any tips or further reading you’d recommend on this topic.
Generally you’d want to have a clear idea of what you want to write before you write it, almost to the point where you could just give a robot the instructions and it could do it for you. The mental model doesn’t have to be a 1:1 copy of the code though; it may be more abstract, but you should be able to extrapolate the code from it.
If you’re having trouble imagining how an existing codebase works, then that’s a problem in and of itself, and making changes to it while instantly checking if they’re correct is bound to introduce bugs at some point.
Say that you want to make a change in a piece of code someone else wrote. You’re not exactly sure what it will do, so you change it, run the tests and see what happens. If it does the right thing, you might convince yourself that the change is correct and move on … but it might also be the case that the tests just don’t cover a new type of bug you’ve introduced.
The problem started a bit earlier, when you began thinking “I’m not exactly sure what this does”. Don’t make changes to code you don’t have a clear mental model of (unless in a hurry hehe). Build the mental model first, read through the code and try to understand how it works. You should be able to answer questions like “are there any invariants that I might be in danger of breaking?”.
It takes a lot of patience to do this; especially the first time around a particular type of code, it might take a while before you get used to thinking about it. There are no magic bullets really, you just have to get familiar with the type of program you’re working with.
One thing that I find very helpful at times is to read the code outside of an editor. If it’s short enough, you can print it out and read it on paper while scribbling on it with a pencil. Or you can just use a phone/tablet, or another computer that doesn’t have an editor, and read it in a web browser or something.
Not being able to make changes will force you to focus on what is important.
Maybe now they’ll stop messing around with new fancy features, and actually fix their shitload of bugs that have been plaguing Xamarin for the past X years.
Well there was the ‘little’ problem of monetization before - not trivial when you’re talking about something as foundational as a set of APIs. This should help.
I don’t really mind their pricing. Sure it’s not cheap, but what bothers me much more is how broken it is.
I really dislike this article, to be honest. Here are a few specifics:
IDEs are horrible, yes. I agree on that point. But they’re getting better.
Headers might seem counterproductive, but you don’t write C++ the same way you write business apps in C#, where you just churn out massive amounts of code. Templates in header files can’t really be worked around, by design. It’s not perfect, but there are ways around it (not saying they’re pretty). C# also has a “preprocessor” with #ifdefs, and since C++ is used for low-level things, you really do need conditional compilation at least in some places. Also, I don’t understand why you’re bashing namespaces, since C# uses them as well. You can import namespaces in C# much like you can in C++, but C++ also lets you do it at a local scope, which is something C# doesn’t allow you to do.
Compilers are difficult to implement, and having multiple vendors will lead to inconsistencies, especially in new features like C++14. Also, the StackOverflow link you post says something completely different from what you say in your blog post. Portability in C# doesn’t really exist, unless you restrict yourself to the not-so-buggy subset of Mono and drop a few nice frameworks like WPF. If you’re doing platform-specific things in C++, of course it’s going to be platform-specific. That’s the whole point of the language: you can touch the actual machine your software is running on. Not to mention there are a large number of cross-platform portable libraries like POCO, Qt, Boost, etc.
“It’s a counterproductive language” is very inaccurate. Most of the time I feel more productive in C++ than in C#, but it depends on the project. What you experienced is that you can’t program in C++ without learning the language first. I’m not saying it’s newbie friendly, or that it’s easy, but if taught the proper way it can be very accessible. I know of a lot of novice programmers who are quite happy programming in C++ (even having known C# and Java before), just because a lot of things are more elegant and direct in C++.
You bash std::chrono, yet the reason why DateTime is so easy is that it’s put in the almighty global System namespace. Also, you can either have std::chrono autocompleted, or put using namespace std::chrono in a local scope and be done with it. The name of high_resolution_clock is debatable, but since there are multiple clocks available (based on the type you actually need), I’d say it’s a fair decision to name it based on what it does.
Quoting: “And there’s no reason why the C# version would be any slower than the C++ code, if you don’t mind the JIT work.” Not in this particular case, but LINQ in general leads to some horribly inefficient code due to the way it works, and a lot of people just use it without thinking twice and produce such code. Sure, the LINQ version has one less parameter, and there are actually libraries that provide STL-like functionality on collections directly, instead of using iterators; I’m not arguing about that. Though the lambda syntax for C# is shorter, C++ by design absolutely has to have the capture list.
Lastly, you mention that shared_ptr/unique_ptr make code hard to read. They do make the code a little more verbose, but they help readability in the sense that you have a much better idea of what’s going on. If I see a unique_ptr somewhere, I instantly know a lot more than if there was just a *, or, in the case of C#, nothing.
I don’t want to sound as if I’m fanatically defending C++, since it’s a language with many flaws. But the points you mentioned seem wrong, and if I had to guess, they’re caused by the fact that you started programming in C++ before you had a deeper understanding of the language, which can be a source of frustration, yes. It’s not easy to learn C++, and it does take quite a bit of time to get good at using the language. On the other hand, there aren’t that many contenders, and C# definitely isn’t one of them in the areas where C++ should be/is used and/or required.
That’s only kind of true. CS is an applied, algorithmic, iterative branch of mathematics. It has its own valuable contributions and unique approach that is semi-independent of mathematics.
Not necessarily true, most of my CS classes are completely without a computer, even the labs. I’d argue that it’s even more effective (if you have some basic experience with programming) than sitting at a computer.
For example, analyzing algorithms in assembly which you had to hand-code on a blackboard was one of my favorite things.
Nice timing. I’m on about year 4 (or is it 5?) of Arch + xmonad. I’ve always used Thinkpads, so there haven’t been any hardware support issues, but I do miss the OS X + Macbook battery life and it seems my wifi is always more spotty than coworkers/family using the same networks on a Mac.
I was considering switching back to a Mac recently, hoping that one of the tiling “WMs” on OS X was decent these days. Are they all bad? I assume I’d just run an Arch VM for actual development needs and use OS X as a “skin with good battery life” + iTerm (to the VM). Am I crazy?
I was considering switching back to a Mac recently, hoping that one of the tiling “WMs” on OS X was decent these days. Are they all bad?
Yeah, I just tried Yosemite on my Macbook Pro before reinstalling Ubuntu+xmonad. They’re all bad. Mostly don’t even really work properly.
Is the battery life that bad? I guess I can’t compare since I only ever used a 15" MBP, but the battery life on that one is comparable to the one in my 13" Thinkpad.
Though using a VM on OS X might eat more battery than if you just used Linux on the metal.
None of the alternate window management add-ons on OS X are really worth a damn. They really can’t be, because they always end up fighting the platform. This doesn’t bother me, but if you’re looking for something like ion but able to control native windows, you’re going to be disappointed. I use Optimal Layout, which allows me to easily resize windows, but it’s a far cry from when I used to use FreeBSD and X-Windows.
Phantom types are really cool but I feel the ruby example is misleading. I think that something like:
Message = Struct.new(:text) do
  def ciphertext
    @ciphertext ||= encrypt(text) # encrypt plain text logic
  end
end

def send_message(message)
  # send using message.ciphertext
end
Would be a big improvement.
This really depends. You might not want to have the encryption logic in the message itself. You’re tying your data to the way you manipulate it, which works in this case, since there are only two states for the message.
But imagine something like strong parameters in Rails, if you’re familiar with them. If you do something like User.new(params[:user]) these days, you’ll get an UnpermittedParametersException, since you need to permit them first. But the process of permitting depends on what you want to do, since you rarely want to permit them all, which would be the case with something like User.new(params[:user].permit!). Rather than that, most Rails apps do something like this
In this case you’re bound to only pass permitted parameters to the model, but you can’t automate it or force it by construction as you did in your example. However, using phantom types it would be really easy to do, since there would just be Params Permitted and Params Unpermitted.
I’m not sure I follow. Parameter permissions is an integration problem. On reception from the outside world of incoming data from an arbitrary sender, one has to parse and/or validate that data at runtime whether the language is static or dynamic. I would not recommend passing around non-validated parameters in application code. Validate the incoming data then only provide validated data to the rest of the application.
Yes, you are absolutely correct. The point I was trying to make is that if you try to pass unpermitted parameters to a function like User.new, it should be a type error, not a runtime error (which can even end up hidden sometimes).
AFAIK the current solution in Rails will raise an exception on unpermitted params, which is fine, but phantom types give you an option to implement the same thing at compile time :)
A wonderfully clear explanation of phantom types, although it misses out on one of their most important features. Phantom types are particularly excellent because they carry no runtime penalty! The types exist only at compile time, and are then stripped out once the program has passed the type checker. So you get the additional safety of a type without the necessary overhead of storing the type when the program is run.
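For the curious, the compile-time-only nature can be sketched in TypeScript with a phantom type parameter (a hypothetical example; Params, permit and buildUser are my own names, not from the thread):

```typescript
// The State parameter exists only at compile time; nothing is stored
// for it at runtime, so the phantom type carries zero overhead.
type Params<State extends "unpermitted" | "permitted"> = {
  data: Record<string, string>;
  readonly _state?: State; // phantom marker, never actually assigned
};

// Copy only the allowed keys, producing a value whose type records
// that permitting has happened.
function permit(p: Params<"unpermitted">, allowed: string[]): Params<"permitted"> {
  const data: Record<string, string> = {};
  for (const key of allowed) {
    if (key in p.data) data[key] = p.data[key];
  }
  return { data };
}

// Only permitted params are accepted; passing Params<"unpermitted">
// here is a compile-time type error, not a runtime exception.
function buildUser(p: Params<"permitted">): Record<string, string> {
  return p.data;
}
```

At runtime both states are plain objects of the same shape; the type checker alone keeps unpermitted data out of buildUser.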
It’s weird, earlier this year I switched to emacs as my primary editor after about a decade of using mostly vi(m) (and set -o vi in the shell); however, I’ve never used evil mode. That’s probably because I didn’t know about it when I was switching, but I think going cold turkey might have helped (forced) me to learn emacs better. Anyone else do the switch and find otherwise?
When I switched to emacs I would get extremely frustrated by the drop in productivity and having to look up everything. It really is hard to learn something new that replaces something you are already proficient in.
Evil Mode is a great escape hatch for not having to launch vim. See, once you launch vim you tend to stay in vim (also true for evil mode). Switching to Evil mode is a less obtrusive context switch.
I would stay in emacs, attempt to solve the problem and if I couldn’t do it in under 30 seconds to a minute, I would switch to Evil. Solve it with vim keybindings and then make a note to myself to solve this problem in emacs while I wasn’t under pressure.
For me, Evil helps.
When I made the switch about a year ago, I decided not to use Evil mode, because I would rather take the productivity hit and learn the emacs methods than fall back on vim keybindings, a tool from which I was trying to migrate. Similar to when people switch to vi(m), everyone says to take the productivity hit and always use h, j, k, and l rather than the arrow keys, until you learn it well enough where there’s no productivity gap. Same thing with emacs.
I just had a legal pad of paper next to my desk, and would keep around ten functions/commands (for instance: find and replace was on my list for a while [M-%]) scratched down. Whenever I decided I was proficient with one command, I would cross it off the list and add a new one. It made it easy to learn the things I would use pretty consistently, and I ended up with a list of 10 various commands that were extremely useful, but didn’t use them enough to have them memorized. It turned into a pretty handy reference!
Hi! I’m the original author of the article. I guess the biggest obstacle for me wasn’t not knowing Emacs. I can use it without Evil Mode just fine, but with it I’m much much much faster. My style of work generally involves writing/moving the code around as I think, instead of just staring at the screen, and VIM is much faster at that (or VIM keybindings).
There is god-mode for Emacs, but I never really tried that, since Evil mode works perfectly for now :) It might be worth investigating though.
I cannot disagree hard enough with this article. I am surfing to relax from the stress of not five minutes ago finishing a 90m talk on this very topic to my company.
It sounds like bad unit tests. Looking at the tags on the blog I’d guess the author is talking about testing Rails, and I think the Active Record pattern (to say nothing of the AR lib) is part of this: it encourages your objects to all know about each other, maintain state, and use global variables (the db). When you’ve been doing this, you end up mocking inappropriately (so you don’t know when you’ve broken protocols) to deal with not being able to test things in isolation - or even guess if a message you send will hit the db. You end up scared of your tests because it’s unpredictable if they’ll tell you all your failures, or you’ll change something small and half of them will go red, or you’ll make a change and spend an unreasonable amount of time updating tests to match the new behavior. This is really, really common.
It doesn’t mean unit tests should not be trusted. It means your unit tests can be better, maybe because the environment you’re interacting with encouraged poor design.
Sandi Metz’s talk The Magic Tricks of Testing tells you what to test, and Rainsberger’s Integration Tests are a Scam gives you the protocol for writing tests.
I’m wrapping up a rewrite of a personal project away from the AR pattern and into tests as described by Metz and Rainsberger and immutable values + entities with identity + repositories as described by Eric Evans in Domain Driven Design and I’ve been surprised by how much easier my tests are to work with now: failures generally break only a few tests, and they’ll break a test that is very low-level so I don’t spend time digging down to find errors. Significant refactorings have not caused me to curse my tests.
Hey, I’m the author. While this blog post was mostly inspired by Rails, and I agree with most of your points, I’d say that this really depends on your definition of refactoring.
I was talking about larger-scale changes that span multiple classes, where your unit tests are inherently going to break because you change the way the objects interact. At that point you have nothing left to rely on but your higher-level tests, because all of the unit tests are failing.
Sandi Metz talks about things at a small scale. I actually finished re-reading her book last week, and I have to say it left me feeling very sad. If you have 5,000 lines of messy, tangled code that you need to refactor, there’s no way you can rely on the messy unit tests, because they will give you zero information about the whole system.
Yes, integration tests are a scam, and they can’t test every corner of the app, but that’s not what I’m saying here. I’m talking about having something that verifies the happy path of your code, to the point where you know the core logic is working. Such tests aren’t supposed to cover every single edge case, but rather keep the big green button lit, saying this shit didn’t break.
Significant refactorings have not caused me to curse my tests.
Then either all your applications are rainbows and unicorns, or we have a different definition of significant.
No, we’re on the same page: refactorings that span multiple classes. This app is far from rainbows and unicorns (it was one of the first OO codebases I wrote). But it’s certainly young: I’ve only been using this strategy a few months, and it’s just one fairly small app (~5 kloc, half tests). Maybe it will really bite me in a year, or when the app doubles in size; I don’t know yet. But one stressor is that the app archives emails, so it is constantly dealing with really terrible, invalid input and having to make something useful out of it, and it’s done really well on that measure.
I’ve written up some of my recent experiences on my blog and I already follow your feed (I really liked your duplication in tests post), so let’s just keep coding and blogging and maybe we’ll figure it out. :)
Actually… I said I’m following your blog, but now that I’ve checked, the feed your page pointed me at, http://blog.jakubarnold.cz/rss, looks like it’s just serving your homepage, so I’m not getting updates.
I’ve fixed the URL; it’s supposed to be http://blog.jakubarnold.cz/feed.xml
I’ll check out your blog :)
If you have 5000 lines of messy tangled code that you need to refactor,
… you should be reading “Working effectively with Legacy Code” by Feathers rather than POODR, as much as I love it.
I have that on my bookshelf, as well as Refactoring, Implementation Patterns, Growing OOS Guided by Tests, and other books on the topic, though they all seem just a bit too idealistic.
Different strokes for different folks, I guess. I found “Legacy” to be one of the most practical books I’ve read.
Huh, I found WELC to be a really practical guide. I read it at a job whose first bullet point on my resume was “Wrote -110,000 lines of code”. :)
I’m not a big GOOS fan, but I got a lot out of Refactoring, Patterns of Enterprise Application Architecture, and Practical Object-Oriented Design in Ruby.
Maybe part of it was being solo/on small teams. If I wanted to experiment with a design style or use new refactoring techniques, I could just do it.
In terms of committing tests with the feature/fix, I’m keen on the idea that we write a “failing test” before the fix (to avoid the failure mode where you write the test for your fix afterwards, see it pass and say “that’s all ok”).
What do people think about reflecting this in the commit history? Doing so makes a visible statement that you did this, which is good (and helps promote the practice on the team), but it also breaks CI until you push the fix. That might be a feature: the codebase does actually have a bug; it just wasn’t surfaced until now. But it might also be disruptive.
A perfectly deployable master branch at every commit is overrated. It is perfectly acceptable to gate decisions on tests or even human awareness.
Having a perfectly deployable master branch when working with other people means that at any moment I can start work on a new feature by branching from master.
Perhaps you use some other marker to determine this, but then it’s the same principle, just with a different mark.
(Of course in practice that means that sometimes things get merged into the main branch that are busted, and this doesn’t hold. But if it’s exceptional it’s wasting way less time than if it’s the norm)
This. There are so many cases where you just don’t have a nice way of solving a problem, and sacrificing the actual solution at the altar of an arbitrary process isn’t worth it, IMO.
In addition, you rarely find solutions to current problems in git history. Sometimes, yes, for consultation, but you’ll rarely actually be deploying that ancient commit.
Most often you want to know why a line of code was added and what problem it was fixing. Or you are doing a git bisect, and a failing build might interfere with that; but most often you really just need sufficient context for the change in the commit message.
Anything else, perfect tests, perfect messages, perfect docs, is overkill, though can be nice.
When I’ve done this workflow (commit the failing test first) I do it in a branch, then publish a PR to GitHub (where I can watch CI fail on the PR page), then squash commit that to main later along with the implementation.
I just saw an interesting idea on Twitter: pytest offers an “xfail” mechanism which can mark a test as “expected to fail” such that it can fail while still leaving CI green. So you can use that to check in a known-to-fail test without breaking things like “git bisect” later on: https://twitter.com/maartenbreddels/status/1586609659464630273
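A minimal sketch of that xfail approach; the function, bug, and issue number here are made up for illustration:

```python
import pytest

# Hypothetical buggy implementation, committed alongside the failing test.
# Known bug: parse_price("1,50") should treat the comma as a decimal
# separator, but float() raises ValueError on it.
def parse_price(text):
    return float(text)

# xfail: the test still runs, is expected to fail, and CI stays green.
# With strict=True, pytest fails the suite once the test unexpectedly
# passes, reminding you to drop the marker after the fix lands.
@pytest.mark.xfail(reason="comma decimal separators not handled, see #123", strict=True)
def test_parse_price_with_comma():
    assert parse_price("1,50") == 1.5
```

Because the commit with the xfail-marked test keeps the suite green, later `git bisect` runs over that range still build and pass.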
You can, and even should, write the test first, but it should go in the same commit as the fix, at least for published commits.