1. 16
  1.  

  2. 11

    The counter to this is that sometimes you need tools that facilitate how your company works. This can include CI/CD tooling, repo management, and so on. These things are specific to how your company operates and need to stay specific in order to run smoothly.

    1. [Comment removed by author]

      1. 3

        The build system is an interesting case. OpenBSD (to pick a publicly known example) is built with a whole bunch of Makefiles and some custom scripts. In aggregate, it amounts to a custom, internal tool. If we were a company and OpenBSD were our product, that would mean we should either sell this tool (to whom?) or scrap it (and replace it with what?).

        1. 2

          The alternative to trying to sell it is to release your internal build system as open-source. But, of course, the only direct benefit to your company is to get negative feedback that you wouldn’t otherwise get. Wide adoption of a tool specialized to one particular company’s workflow is never going to happen, for all sorts of reasons.

          1. 2

            replace it with what?

            With the standard, of course.

            1. 3

              Yeah, that’s cute, but wrong. Autoconf doesn’t solve the same problem. An internal tool written in Makefile.am and m4 is still an internal tool. And to what end? We’d end up with an auto-generated Makefile running the same scripts we run today.

              1. 1

                Well your tool should factor cleanly into two parts: a declarative config value that expresses the description of your project in a suitable domain language, and an interpreter that executes it (or in practice a matryoshka series of interpreters, where at each step the representation is further from the domain and closer to the machine operations). This is worth doing even if you don’t go any further; it makes everything safer (the domain language is much more constrained, many wrong programs that you could write in the low-level language are simply not expressible in the high-level language), more readable, and much more testable (you can test each interpreter as a pure function, and you can write alternative interpreters for the same values that allow you to test particular aspects of your build definitions). Then you can publish the interpreter, and the config is just a config (ideally not even expressed in a turing-complete language).
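
                To make that concrete, here is a minimal sketch in make (hypothetical file and project names, GNU-style include syntax, recipe lines tab-indented): the declarative half is a handful of variables, and you can point more than one interpreter at the same values.

                    # project.mk -- the declarative half: facts about the project, nothing about how to build
                    PROG = frobnicate
                    SRCS = foo.c bar.c

                    # build.mk -- one interpreter: turns those facts into compile and link rules
                    include project.mk
                    OBJS = $(SRCS:.c=.o)
                    $(PROG): $(OBJS)
                            $(CC) $(CFLAGS) -o $@ $(OBJS)

                    # sources.mk -- an alternative interpreter over the same values,
                    # handy for checking the declarations without running a build
                    include project.mk
                    list-sources:
                            @echo $(SRCS)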

                Autoconf is a (very crude) implementation of some of this idea: your configure.ac should look like a declarative config file describing your project (things like “foo.c and bar.c are part of this program’s source”), while the autoconf infrastructure takes care of non-project-specific facts (“invoke tar to build archives”) and of turning the declaration into execution. In theory this should mean less incidental complexity (i.e. specifying implementation details, imperative descriptions of exactly what programs are run in what order) than a makefile. Up to a point the same is true of make: it’s meant to just describe the simple rules that are used to build your project. The problem is that a Makefile is still very close to a program rather than a config; it has operational semantics but very little in the way of denotational semantics (since most of a Makefile usually consists of calling arbitrary unix programs), you end up specifying a lot of incidental details, and it’s very easy to write a Makefile that will do absurd things.
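
                For a sense of what the declarative side looks like when it works, a minimal automake/autoconf project (hypothetical names) really is just a few lines of description, with the operational knowledge living in the shared tooling:

                    # Makefile.am -- declarative: which programs exist and which sources they own
                    bin_PROGRAMS = frobnicate
                    frobnicate_SOURCES = foo.c bar.c

                    # configure.ac -- declarative: what the project is and what it needs to build
                    AC_INIT([frobnicate], [1.0])
                    AM_INIT_AUTOMAKE([foreign])
                    AC_PROG_CC
                    AC_CONFIG_FILES([Makefile])
                    AC_OUTPUT

                Everything imperative (compiling, linking, install, dist) is generated by autoreconf/automake rather than written per project.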

                You could argue that a config file is still a program/tool, but I think the distinction is meaningful. Something that just describes a project (like a maven pom.xml, which often only consists of the project id/version, the ids/versions of any dependencies, and a few things like where the SCM is) is very different from a sequence of shell commands to execute. It’s possible to have that kind of clear separation and have both halves be written in make (make being something close to a first-class programming language), but I think in that case it is worth “selling” the common part of your make config - a friend did exactly that for a project we worked on, https://github.com/jbytheway/makeshift

                1. 1

                  Well your tool should factor cleanly into two parts: a declarative config value that expresses the description of your project in a suitable domain language, and an interpreter that executes it (or in practice a matryoshka series of interpreters, where at each step the representation is further from the domain and closer to the machine operations).

                  Bingo. Per-directory Makefiles, the included bsd.*.mk files, and make(1) are three layers, from the declarative description down to the interpreter.
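
                  For anyone who hasn’t seen the BSD layout: the declarative layer for a program is typically just a couple of lines (roughly what OpenBSD’s bin/ls Makefile looks like; the exact source list may differ):

                      # bin/ls/Makefile -- declarative layer: just the facts about this program
                      PROG=   ls
                      SRCS=   cmp.c ls.c main.c print.c utf8.c

                      # bsd.prog.mk -- the middle layer: generic rules that interpret PROG and SRCS,
                      # with make(1) itself as the bottom layer that executes them
                      .include <bsd.prog.mk>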

                  This is worth doing even if you don’t go any further; it makes everything safer (the domain language is much more constrained, many wrong programs that you could write in the low-level language are simply not expressible in the high-level language), more readable, and much more testable (you can test each interpreter as a pure function, and you can write alternative interpreters for the same values that allow you to test particular aspects of your build definitions).

                  Autoconf is a (very crude) implementation of some of this idea:

                  Crude enough to render it more harmful than beneficial. Badly designed layers make things less testable. Autoconf, to me, seems less transparent than make -d.

                  It also makes things less testable just because typical configure.ac files are orders of magnitude longer than typical BSD Makefiles. Just look at this beauty.

                  Besides, Autoconf solves the wrong problem. OpenBSD is an operating system. They don’t need automatic detection of header files and OS features for their ls; they know where everything is.

                  1. 1

                    It also makes things less testable just because typical configure.ac files are orders of magnitude longer than typical BSD Makefiles. Just look at this beauty.

                    So yeah, to be clear I’d say that the infrastructure that allows that to work seems worth separating out and “selling” to other projects.

                    1. 1

                      That would be sweet to have, but this infrastructure fits an OS better than individual software packages.

                      But hey, if one had to distribute a bunch of software, like lots of binaries and some libraries, as one piece for different operating systems, one could bundle a thing like that with it. And one did: remember xmkmf/imake? The thing X killed in favour of the “standard” mentioned upthread? Here’s a taste from two decades ago. [sheds a tear]

                      1. 1

                        Shrug. I remember debugging xmkmf when trying to build X. And I shudder to think what trying to cross-compile with it would have looked like. As a user, if your build system is a) reliable and b) supports the same user interface as autoconf (at a minimum: ./configure --host and --build, standard names for well-known libraries in --enable/--disable, and building for newer/rarer OSes that were unreleased at the time of packaging with nothing harder than replacing config.sub), then go nuts. But if not, I’ll trade quite a lot of configure.ac verbosity for the sake of those two things.
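
                        For reference, that standard interface amounts to something like this (the --enable/--disable flags vary per package; these are only illustrative):

                            # --build is where you compile, --host is where the result will run
                            ./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabihf \
                                        --disable-nls --enable-static

                            # For an OS newer than the release, drop in updated config.sub and
                            # config.guess from GNU config rather than patching the build system.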

            2. 1

              I’d replace it with Maven, personally (though that reflects language views that I doubt you share). No doubt others would recommend CMake or the like. If you really have built a build tool that’s better than the existing options then any number of large organizations (Google, Twitter, …) would like to buy it.

        2. 7

          I just want to point out a few exceptions: when your internal tool provides you with a competitive advantage in your market. Think ultra-fast trading companies. Or, in the case of a hotel company, the general management software (rooms, turns, bookings). Or Google’s search algorithms.

          1. 5

            I find there’s a limit to how much you can learn from a toy; when learning there’s no substitute for making something that’s actually going to be useful to someone else. So I would always spend downtime trying to make something useful.

            In some hypercapitalist utopia where there was zero friction to selling things I’d want to sell every random useful script, but the whole point of the firm is that its internal economy has lower transaction costs than the real economy. So it shouldn’t be surprising that some tools will be valuable enough to sell internally, but not valuable enough to sell externally.

            1. 4

              I’m not sure I’d counsel selling a tool that is particularly important to your business. The post here argues that if it’s any good, you’re leaving money on the table by not letting it flourish as its own product outside. But if it’s important to your business, selling it transfers significant power over your future business operations to an external entity, whose goals may or may not be aligned with yours.

              Companies often try to do the opposite: if a tool that’s important to you started out external, maybe just buy it (or buy the company that makes it) to ensure you have full control over your toolchain. Sometimes this ends up being done on short notice because the other company goes bankrupt, so you either have to buy them, or buy their assets, to keep your tools from disappearing. Or if you were prudent enough to negotiate a source-escrow clause, you have to exercise it and take over maintenance of the tool internally, which amounts to buying it (but for $0). Which often ends up more disruptive than if you had just had the tool in-house to begin with.

              1. 4

                This is why programmers should not veer into the domain of ‘business’ without careful thought.

                It’s a partially useful set of rules for assessing internal project viability, but the ‘sell it’ thing is completely missing the fact that a product needs a business around it, and this is the primary consideration for launching a new product.

                1. 2

                  Our open source product makes these kinds of internal apps very quick to develop:

                  http://haplo.org/

                  For something like an internal LinkedIn, configure the semantic-web-inspired object store to describe the entities involved, then add a bit of scripting for any business-specific policy or features.

                  One of my aims in building it was to bring down the cost of custom applications for internal business use by a couple of orders of magnitude. I think we’ve got a long way towards this goal.

                  1. 2

                    For something like an internal LinkedIn, configure the semantic-web-inspired object store to describe the entities involved, then add a bit of scripting for any business-specific policy or features.

                    I think you end up doing something just as complicated as coding. And I think the claim that a simple Rails app takes a week to get going is an order of magnitude out.

                    1. 3

                      I agree. These “rapid development” tools can be more pain than they’re worth.

                      What we’ve done is:

                      • Focus on one specific area, “information applications”, which collect together things representing the real world.

                      • Provide a full admin UI for the schema, which describes what you want to collect.

                      • Provide a full user UI for that information system, including editing, search, discovery, etc.

                      • Do all the basic things you need, like user accounts, permissions, notifications, workflow systems, etc.

                      • Easy hand-off from information configuration to developers.

                      • API designed to script these features, rather than a generic application framework.

                      • Don’t try to do point-and-click tools; make it easy to script a toolbox of components.

                      Within our sweet spot, we have built applications in less than one day which were genuinely useful and which the organisation kept using past the honeymoon phase.

                      If you step outside the sweet spot, of course it’s not so great, but it’s quite a big sweet spot. Every organisation has a load of “stuff” they need to keep track of and share, and needs some lightweight workflows to help people work together.

                      Here’s a pretty fully featured repository application: https://github.com/haplo-org/haplo-research-manager