1. 34

  1. 9

    I agree with the principle. I disagree with using databases most of the time - in my experience, people are too quick to sign up for all their downsides when they don’t actually need the things databases do better than the alternatives, and should instead consider a key-value store, a flat file, or a little bit of code.

    It’s also very easy to become unreasonably biased against “code”, and use complex config - or worse, a “rules engine” or ESB - to do things that you should really just “hard” code.

    1. 10

      In my experience, waiting too long to use a database is more common than being too quick to do so. I’ve wasted so much time on clumsy application-level implementations of complex joins and query logic because somebody thought an RDBMS would be “too heavyweight.” To steal a line from another commenter: sometimes pages of code can save you half a line of SQL.
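
      To make that concrete, here is a hypothetical sketch - all data and names invented for illustration - of the application-level version next to the half-line of SQL:

        # The half-line of SQL:
        #   SELECT c.name, o.total FROM orders o JOIN customers c ON o.customer_id = c.id
        # versus a hand-rolled application-level join over the same made-up data:
        customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
        orders = [{"customer_id": 1, "total": 42.0}, {"customer_id": 2, "total": 7.5}]

        customers_by_id = {c["id"]: c for c in customers}
        joined = [(customers_by_id[o["customer_id"]]["name"], o["total"])
                  for o in orders if o["customer_id"] in customers_by_id]
        # [('Acme', 42.0), ('Globex', 7.5)] - and this is still the easy case,
        # before outer joins, grouping, or ordering enter the picture.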

      1. 8

        I don’t know why more people don’t use SQLite in scenarios like this - it doesn’t have all the fancy stuff MySQL and Postgres do, but for a serverless database it performs really well. I rarely use anything else now.

        1. 4

          I was going to comment “sometimes I’m in a business-enforced environment where I can’t have any extra dependencies”, but after discovering that Python has included SQLite natively since 2.5, I’ve had a paper-bag-on-head moment for the first time in a few years.
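
          Since it is apparently easy to miss, here is a minimal sketch of the stdlib module (the table and values are made up for illustration):

            import sqlite3

            # No server process, no extra dependency: sqlite3 ships with Python.
            conn = sqlite3.connect("app.db")  # or ":memory:" for a throwaway database
            conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
            conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("greeting", "hello"))
            conn.commit()
            row = conn.execute("SELECT value FROM kv WHERE key = ?", ("greeting",)).fetchone()
            print(row[0])  # hello
            conn.close()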

          1. 2

            Honestly I think the main issue is that SQL is kinda hard to compose unless you’re writing DB-level stuff. It’s so much easier to grab something like pickle.

            SQL is by far the weakest link in SQLite IMO. The syntax, and how hard it is to compose compared to just working in the host language, get in the way for a lot of things.
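
            To make the grab-pickle point concrete, here is the whole “persistence layer” done with pickle (a made-up example) - no schema, no query strings to assemble:

              import pickle

              state = {"greeting": "hello", "count": 3}

              # Dump and load arbitrary Python objects, two lines each way.
              with open("state.pkl", "wb") as f:
                  pickle.dump(state, f)

              with open("state.pkl", "rb") as f:
                  state = pickle.load(f)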

      2. 9

        I wholeheartedly agree with the “code romance” section of this piece, and that is something I’ve fallen victim to several times, especially early in my career. I saw it summed up thus recently (unfortunately I cannot remember where, or I’d link to it):

        Weeks of coding can save you hours of planning.

        1. 5

          This has been the opposite of my experience in my career. Weeks of planning are a good way to turn hours of coding into months of coding. Whereas if you just start writing code, a better design than anything you could have come up with ahead of time will usually fall out.

          1. 20

            Whereas if you just start writing code, a better design than anything you could have come up with ahead of time will usually fall out.

            I disagree heavily with this, unless you’ve got a pretty clear idea of what you need to do–especially in business systems.

            If you want to make the argument that you should prototype a system, see where it sucks, and iterate, I would agree with you. Unfortunately, most folks don’t prototype and instead rush their janky “yolo MLG 420 blazeit” little thing into production and then never allocate proper engineering time to fixing it.

            I’ve worked with a codebase where, had the previous team sat down and actually modeled the domain at all, they’d’ve avoided creating the staggering amount of technical debt that followed.

            And the problem with “fuck design just code” is that when you bring on new people, they are at a tremendous disadvantage. Code itself is probably the worst-possible way of understanding why a system does what it does. So, under pressure to deliver and without a good understanding (because no overarching design is written down, remember?) they too forgo upfront planning and just start writing code.

            It’s a vicious cycle of poverty and violence.

            1. 4

              I’d have agreed with you more a few years ago, when I still called myself an OO programmer. The trouble is, IMHO, that OOP is very incompatible with explorative programming. I’ve experienced first-hand again and again that functional code is much more resistant to that chaos.

              In OOP, everything lives in a complex tower of class and interface hierarchies, so the code you write depends heavily on the design. In FP though, you define relations between the concepts of your problem domain, and architectural swings just become rearranging things.
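
              A minimal sketch of the contrast, using a made-up one-concept domain (illustrative only):

                from dataclasses import dataclass

                # OO style: behaviour is attached to the hierarchy, so moving it
                # means reshaping classes and interfaces.
                class Invoice:
                    def __init__(self, amount):
                        self.amount = amount

                    def total_with_tax(self, rate):
                        return self.amount * (1 + rate)

                # FP style: plain data plus free-standing functions relating the
                # domain concepts; an architectural swing is mostly a reshuffle.
                @dataclass(frozen=True)
                class InvoiceData:
                    amount: float

                def total_with_tax(invoice: InvoiceData, rate: float) -> float:
                    return invoice.amount * (1 + rate)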

              1. 1

                If you want to make the argument that you should prototype a system, see where it sucks, and iterate, I would agree with you. Unfortunately, most folks don’t prototype and instead rush their janky “yolo MLG 420 blazeit” little thing into production and then never allocate proper engineering time to fixing it.

                I think even an explicit prototype is a wasteful artificial split. Continuously improve your codebase as necessary. Put your janky little thing into production if doing so produces some value for you. As and when the jank causes you trouble, improve it, to the extent that’s justified by the value involved.

                Code itself is probably the worst-possible way of understanding why a system does what it does.

                In my experience good code is the best-possible way - it’s written in the language of the domain, but linked to the implementation, so it can’t get out of date the way non-code documentation tends to. If the code is difficult to understand or difficult to explain, that sounds like a reason to spend some time improving it.

                So, under pressure to deliver and without a good understanding (because no overarching design is written down, remember?) they too forgo upfront planning and just start writing code.

                Good, because that tends to work out better.

                1. 6

                  I have to ask–what systems are you thinking of that have validated your approach here?

                  I am incredibly skeptical, and would be interested in data to expand my worldview.

                  Like, academic code for scientific computing (a friend of mine does this) would seem to be a huuuuge contradiction to your “tends to work out better” claim.

                  1. 1

                    I have to ask–what systems are you thinking of that have validated your approach here?

                    My own professional experience - a number of startups (one of which failed quite dramatically in a way that I attribute almost entirely to excessive design) and a couple of larger corporates. I’m not going to be too specific.

                    Like, academic code for scientific computing (a friend of mine does this) would seem to be a huuuuge contradiction to your “tends to work out better” claim.

                    Maybe. I don’t agree with everything in http://yosefk.com/blog/why-bad-scientific-code-beats-code-following-best-practices.html, but I think there’s a kernel of truth there: overengineering is the worst problem to have in a (working) codebase (in the sense that it’s the most expensive to fix), and overdesign is very likely to lead to it.

              2. 3

                I agree that the kind of ‘planning’ you get from a certified agile coach is an excellent way to turn hours into months.

                I suspect the core disagreement here stems from the difference between planning the data model and planning the code.

                Sitting down and writing a reasonably complete description of the kinds of data you’re dealing with is usually a pretty short exercise which saves considerable time.

                1. 1

                  Planning the data model is something I’ve seen go massively wrong. Often what you do will tell you what the data model should be, far better than any planning exercise would. http://wiki.c2.com/?WhatIsAnAdvancer is a good description of the kind of positive experience that up-front planning eliminates.

                  1. 1

                    That’s a description of planning a set of classes and responsibilities. Unsurprisingly, it didn’t work.

                    Trying to build a trading system without plain English definitions for terms like “Portfolio”, “Position” or “Schedule” results in you learning those definitions when you deliver something that doesn’t work (hence the emphasis on ‘deliver early’ in most agile processes - you’ll learn about your misunderstanding earlier).

                    The core act of planning the data model, IMO, is coming up with a single definition for each of those terms - no names with multiple meanings (eg contract schedule / calendar schedule) or ambiguous language.

                    It’s also helpful to have names for the relationships between things (CalendarEvent <-> Recurrence <-> Schedule) if you’ll need to communicate about the relationships as well as the things.
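
                    A hypothetical sketch of what pinning those names down can look like in code (the types and fields are invented for illustration):

                      from dataclasses import dataclass
                      from datetime import date

                      # One meaning per name: a contract schedule and a calendar
                      # schedule get distinct names, never a bare "Schedule" for both.
                      @dataclass
                      class Position:
                          instrument: str
                          quantity: float

                      @dataclass
                      class Portfolio:
                          name: str
                          positions: list[Position]

                      # Named relationships: a CalendarEvent recurs via a Recurrence,
                      # and a Schedule is the collection of those Recurrences.
                      @dataclass
                      class Recurrence:
                          start: date
                          rule: str  # e.g. "weekly"

                      @dataclass
                      class Schedule:
                          recurrences: list[Recurrence]

                      @dataclass
                      class CalendarEvent:
                          title: str
                          recurrence: Recurrence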

                    1. 1

                      I agree there’s value in coming up with good names for things, but I find writing code helps with that - in particular for the relationship example, which relationships should become first-class things is something that becomes more readily apparent when you start actually writing code than it is when you’re planning ahead. Likewise if the same name seems to mean two or more subtly different things, that’s also brought out in the act of actually writing code with it.

                2. 1

                  Is it bad experiences in your past that have led you to think ‘planning results in overengineering/scope creep’? If so, I empathize: I have been on such projects, too. But it doesn’t have to be this way. Here are some planning activities other than planning OO classes in ever-greater detail:

                  • Shrink the project instead of growing it. What’s the minimal useful feature set?
                  • Build a roadmap: a sensible order in which to add features, both leading up to the minimal version and going beyond it
                  • Spend a few minutes thinking about the interface design.
                  • Identify the concepts and constraints of the problem’s domain, the better to match them in your code
                  • Think about the simplest way to implement those concepts while meeting the constraints – simplifying a design is an excellent way to reduce work.

                  You can do as many or as few of these as you like. In my experience, all of them are brief and time well spent.

                  You know that refactoring aphorism “make the change easy, then make the easy change”? Planning – I should really say preparing – can feel the same way. Make the steps obvious, then take the obvious steps.

                  1. 1

                    Shrink the project instead of growing it. What’s the minimal useful feature set? Build a roadmap: a sensible order in which to add features, both leading up to the minimal version and going beyond it

                    You need to figure out which feature is most important right now, and then implement it, sure, but that’s a very minimal form of planning. One of the great successes I’ve seen of the agile meta-process is a place where we realised we were maintaining a 3-month roadmap that wasn’t providing us with any value, and replaced it first with a 1-month roadmap, and then with just collecting requests from the business at the start of every 2-week iteration.

                    These days for a lot of products the minimum version - the version that can get you up and running as a business and iterating towards product/market fit - really should be implementable in 2 weeks, even if that 2-week version is just a web form connected to an emailer.

                    I guess I don’t quite advocate doing absolutely none of this kind of planning, but I’d say I get pretty close to it.

                    Spend a few minutes thinking about the interface design.

                    I find you think more clearly - and uncover more constraints - if you start by making some kind of interface rather than just thinking about it. And I’d prefer to have the tool correspond to actual implementation rather than have to do the same thing twice - I’m a fan of something like Qt Designer where you can make a “mockup” that then becomes the real implementation.

                    Identify the concepts and constraints of the problem’s domain, the better to match them in your code. Think about the simplest way to implement those concepts while meeting the constraints – simplifying a design is an excellent way to reduce work.

                    Again, I find actual code helps clarify my thinking, and reveal more potential constraints. Sometimes this means building a model, sometimes it means writing out test cases, but I’d always want to have the rigour of actual code when doing any of those activities, and at that point it feels more like implementation than planning, and I’d generally expect the code I’d write to remain in the codebase (even if severely refactored over time).

              3. 6

                This line of thinking only really applies to what can largely be considered solved problems: low-to-mid-scale data storage and reporting, text-boxes-over-data websites, and the like. I personally like to ‘cheat’ quite often, via spreadsheets, one-liners, regex and the like, but a lot of that falls into one-offs. These are handy for someone who has more ideas than time.

                In well-known spaces, organization and finishing are more important skills than cleverness or lines of output. Getting an organizational setup for a piece of software is a difficult skill in its own right, but it doesn’t match the pop-cultural idea of “programming”. Outside of well-known spaces, though, things get more interesting. The edges have some interesting stuff.

                The edges of performance - AAA video games (at least until they shifted from innovating lots of new tech to being more like entertainment productions), compilers (which always need to be faster), and servers/ops at places like Google/Microsoft/Facebook/Amazon - change the economics of optimizing code and of building things in-house, like custom versions of programming languages with different focuses.

                The edges of minimalism - environments like red-lang.org (Red produces very small executables compared to most languages other than C these days), FORTHs, and the demoscene - lead to interesting but different tradeoffs, like very heavy procedural generation, some of which borders on madness (especially in the 2k and 64k demos), at least from a more “normal” perspective.

                And then you have embedded systems, flight control software and RTOS type stuff which needs to perform well, not do too much, and be consistent.

                One thing that separates these kinds of edges from the middle is how carefully you have to budget your computational resources (executable size, memory, disk space, etc.). In the middle it makes more sense to trade off computing power for programming time, but at the edges this runs in reverse.

                (note: this list of examples is by no means exhaustive)

                1. 3

                  To put this more directly, it sounds like he or she is trying to say “wake up and get out of your ivory tower”; this post is more a reaction to the kinds of academic training that new graduates try to apply to their work. I think they usually give up that kind of thinking anyway; they eventually get scoffed at.

                  I’m all for getting results, but I find this attitude to be too heavy. I know the author is trying to drive home a point, but it’s the case that developers write code, day-to-day, for reasons that only in a circuitous way contribute to their salary. Is that so secondary to getting results that it’s not worth mentioning? Or is it better to keep that a secret?

                  1. 5

                    The post has this weird, business-y subordinate tone to it, as if programming is mostly what you can get away with shipping to production. It is incredibly reductionistic and dull. If our lot in life as programmers is to write glue code to whatever the Internet deems as The Best Library, then I’m done with programming. There is room for a healthy appreciation for making things that work well when stressed and shipping regularly.

                    I take pride in my work, and that means doing things as best I can. At the end of every day, I at least have the satisfaction of knowing that.

                  2. 3

                    This is a tricky essay, because it is partially true.

                    The true part is being willing to kill as much of your code as you can. Fondness for code blinds you to the truth of what the design demands.

                    The part that is false is the belief in code reuse. It has not solved the Software Problem; I’d argue it exacerbates it to the point that software development turns into systems integration, which is ugly and boring as sin. It can make software development easier, but I believe we’re still collectively learning the value of composition in the face of rapidly changing requirements from both stakeholders and evolving dependencies.

                    There’s also the thing where people mistake productivity for trying out eight different template libraries when they could have written s.replace("%TITLE%", title) and moved on.
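
                    For the record, here is that entire “template engine”, with made-up values:

                      # Hypothetical illustration: the whole templating step.
                      s = "<h1>%TITLE%</h1>"
                      title = "Quarterly Report"
                      html = s.replace("%TITLE%", title)  # '<h1>Quarterly Report</h1>'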

                    1. 2

                      It’s kind of sad that developers have such a romance with code that we get tunnel vision as to what code should look like aesthetically, how a pattern should be implemented, and what framework to use, all the while having only a naive understanding of the actual problem we’re trying to solve. It seems like this is reinforced by the idea that a developer is “done” once the CI system is passing and a PR has been merged. I would like to say that the real problems are defined outside of code, and therefore the only measures of a developer finishing their job are also defined outside of code. I’m always glad to see articles like this, which say things like:

                      The job of a software developer isn’t to write code. It’s to create solutions that create value

                      1. 2

                        That is true in many cases. The problem is that dependencies are not a free lunch. For most projects this doesn’t matter. AFAICT most projects are used locally or never shipped at all. After they are considered feature-complete, no one cares. A few bugs will get fixed, and dependencies will probably never be updated. Most of the time nothing will happen. Other times your credit card number will leak. It is a problem with this industry (not that other industries are much better).

                        If we blow up this planet because of global warming, it will be because we priced oil with no foresight. Maybe reaching the limits of Moore’s law will stop gratuitous bloat? Or we will suffocate.

                        Sometimes it is good to use a library. But sometimes you end up with a left-pad fiasco. And sometimes a linear search will be faster to write and faster to execute (small N, which on modern hardware can be pretty big). Now, linear search could be described as cheating.
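
                        A made-up illustration of the linear-search point:

                          # For small N (which on modern hardware can be thousands of
                          # items), the obvious scan is quick to write and plenty fast.
                          def find_user(users, name):
                              for u in users:
                                  if u["name"] == name:
                                      return u
                              return None

                          users = [{"name": "ann"}, {"name": "bob"}]
                          print(find_user(users, "bob"))  # {'name': 'bob'}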

                        There is also a circle of life. A successful thing adds more features and never cuts anything. At some point the environment will change and there will be a new thing. This new thing will start from a clean slate and will be faster, simpler, and leaner. After some time it will add more features…

                        As always: there is no silver bullet.

                        1. 1

                          Maybe reaching the limits of Moore’s law will stop gratuitous bloat

                          Hasn’t it already? The attitude that “we can just wait for CPUs to get faster for this” feels like it has gone away, at least concerning performance problems that can’t be addressed by adding cores because the problem structure doesn’t admit parallel implementations.

                        2. 1

                          I think the concept of “10x developers” is largely about just knowing what exists out there to use and how to use it.

                          This is wrong. Although “10x” is somewhat of a stupid phrase (it’s crass to attempt to represent a programmer’s value in business terms) and I’ve stopped using it, that’s not what causes the “10x” phenomenon to exist. Programmers who can write glue code, figure out third-party APIs, and use off-the-shelf assets are a dime a dozen. Programmers who could actually write new technical assets from scratch are not. It’s true that a good programmer will usually prefer off-the-shelf assets for capabilities that are off the critical path, and will use Stack Overflow instead of trying to reverse engineer something, but the problem is that commodity Scrum programmers literally can’t do anything else but that.

                          For the past decade I’ve been a “software developer”. But what I really do is develop solutions, which often happen to involve writing some code, but sometimes it doesn’t.

                          That’s fair, but now that I’m in my 30s, I also think it’s critically important to know which problems are worth solving at all: strategy as much as tactics. Unfortunately, if your title is “software engineer”, the general assumption of the business people is that you have absolutely no competence at a strategic level, which I think is a big part of why competent people do whatever they can to move on to literally anything other than corporate software engineering before they hit 40. (This is not to say that there aren’t competent people over 40 who are still in corporate software. There are. It’s harder to get out than it looks, and the factors that can keep people stuck usually have very little to do with personal merit.) Most of the problems that we encounter in the business world are extremely easy to solve. What makes them hard is resource constraints (penny-pinching philistines have too much power) and political instability. The problem is that 95 percent of the problems we encounter aren’t worth solving because they won’t do anything to advance our needs. If you’re a small company and you produce a brilliant product that no one wants, you die. If you’re a team trying to gain credibility in a large organization but you don’t have management buy-in, it doesn’t matter that the technical work was exceptional.

                          In the research world, the problems themselves are difficult but there’s usually a reward for an effective solution. If nothing else, you get a paper published and you get credit for advancing the field. In the business world, the problems are easy as shit (modulo ugly people problems, like understaffing and political instability) but the reward profile is spotty and most of them just aren’t worth spending time on, whether from a corporate/managerial or an employee-level perspective. The skill that will distinguish you in the business world is figuring out ahead of time which problems actually are worth solving. Of course, if you’re an employee and you’re staffed on a problem that isn’t, there’s not much that you can do about it.

                          1. 1

                            The attitude this piece espouses is reasonable, but one of its points–“Being a software developer involves writing very few lines of code, and the code you do write is usually just to glue things together”–is not. Software development is a very young field. There are multitudes of unsolved problems, to say nothing of the problems we aren’t even aware of yet, and many (if not most) of the problems that are solved are solved poorly, which may or may not be an issue for any given application.

                            It’s conceivably possible to have a career in which you do nothing original, and in which every library you find is good enough as-is, but in no way is that the essence of software development.